{
"paper_id": "D17-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:14:05.358421Z"
},
"title": "OBJ2TEXT: Generating Visually Descriptive Language from Object Layouts",
"authors": [
{
"first": "Xuwang",
"middle": [],
"last": "Yin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Virginia",
"location": {
"settlement": "Charlottesville",
"region": "VA"
}
},
"email": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Virginia",
"location": {
"settlement": "Charlottesville",
"region": "VA"
}
},
"email": "vicente@virginia.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Generating captions for images is a task that has recently received considerable attention. In this work we focus on caption generation for abstract scenes, or object layouts where the only information provided is a set of objects and their locations. We propose OBJ2TEXT, a sequence-to-sequence model that encodes a set of objects and their locations as an input sequence using an LSTM network, and decodes this representation using an LSTM language model. We show that our model, despite encoding object layouts as a sequence, can represent spatial relationships between objects, and generate descriptions that are globally coherent and semantically relevant. We test our approach in a task of object-layout captioning by using only object annotations as inputs. We additionally show that our model, combined with a state-of-the-art object detector, improves an image captioning model from 0.863 to 0.950 (CIDEr score) in the test benchmark of the standard MS-COCO Captioning task.",
"pdf_parse": {
"paper_id": "D17-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "Generating captions for images is a task that has recently received considerable attention. In this work we focus on caption generation for abstract scenes, or object layouts where the only information provided is a set of objects and their locations. We propose OBJ2TEXT, a sequence-to-sequence model that encodes a set of objects and their locations as an input sequence using an LSTM network, and decodes this representation using an LSTM language model. We show that our model, despite encoding object layouts as a sequence, can represent spatial relationships between objects, and generate descriptions that are globally coherent and semantically relevant. We test our approach in a task of object-layout captioning by using only object annotations as inputs. We additionally show that our model, combined with a state-of-the-art object detector, improves an image captioning model from 0.863 to 0.950 (CIDEr score) in the test benchmark of the standard MS-COCO Captioning task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural language generation (NLG) is a long-standing goal in natural language processing. There have already been several successes in applications such as financial reporting (Kukich, 1983; Smadja and McKeown, 1990), or weather forecasts (Konstas and Lapata, 2012; Wen et al., 2015); however, it is still a challenging task for less structured and open domains. Given recent progress in training robust visual recognition models using convolutional neural networks, the task of generating natural language descriptions for ar- Figure 1: Overview of our proposed model for generating visually descriptive language from object layouts. The input (a) is an object layout that consists of object categories and their corresponding bounding boxes, the encoder (b) uses a two-stream recurrent neural network to encode the input object layout, and the decoder (c) uses a standard LSTM recurrent neural network to generate text.",
"cite_spans": [
{
"start": 176,
"end": 190,
"text": "(Kukich, 1983;",
"ref_id": "BIBREF15"
},
{
"start": 191,
"end": 216,
"text": "Smadja and McKeown, 1990)",
"ref_id": "BIBREF30"
},
{
"start": 240,
"end": 266,
"text": "(Konstas and Lapata, 2012;",
"ref_id": "BIBREF12"
},
{
"start": 267,
"end": 284,
"text": "Wen et al., 2015)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 529,
"end": 537,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "bitrary images has received considerable attention (Vinyals et al., 2015; Karpathy and Fei-Fei, 2015; Mao et al., 2015). In general, generating visually descriptive language can be useful for various tasks such as human-machine communication, accessibility, image retrieval, and search. However, this task is still challenging, as it depends on developing both a robust visual recognition model and a reliable language generation model. In this paper, we instead tackle the task of describing object layouts, where the categories of the objects in an input scene and their corresponding locations are known. Object layouts are commonly used for story-boarding, sketching, and computer graphics applications. Additionally, by applying our object-layout captioning model to the outputs of an object detector, we are also able to improve image captioning models. Object layouts contain rich semantic information, but they also abstract away several other visual cues such as color, texture, and appearance, thus introducing a different set of challenges than those found in traditional image captioning.",
"cite_spans": [
{
"start": 51,
"end": 73,
"text": "(Vinyals et al., 2015;",
"ref_id": "BIBREF33"
},
{
"start": 74,
"end": 101,
"text": "Karpathy and Fei-Fei, 2015;",
"ref_id": "BIBREF10"
},
{
"start": 102,
"end": 119,
"text": "Mao et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose OBJ2TEXT, a sequence-to-sequence model that encodes object layouts using an LSTM network (Hochreiter and Schmidhuber, 1997), and decodes natural language descriptions using an LSTM-based neural language model. Natural language generation systems usually consist of two steps: content planning, and surface realization. The first step decides on the content to be included in the generated text, and the second step connects the concepts using structural language properties. In our proposed model, OBJ2TEXT, content planning is performed by the encoder, and surface realization is performed by the decoder. Our model is trained on the standard MS-COCO dataset (Lin et al., 2014), which includes both object annotations for the task of object detection, and textual descriptions for the task of image captioning. While most previous research has been devoted to one of these two tasks, our paper presents, to our knowledge, the first approach for learning mappings between object annotations and textual descriptions. Using several lesioned versions of the proposed model, we explore the effect of object counts and locations on the quality and accuracy of the generated natural language descriptions.",
"cite_spans": [
{
"start": 99,
"end": 133,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF7"
},
{
"start": 674,
"end": 692,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Generating visually descriptive language requires, beyond syntax and semantics, an understanding of the physical world. We also take inspiration from recent work by Schmaltz et al. (2016), where the goal was to reconstruct a sentence from a bag-of-words (BOW) representation using a simple surface-level language model based on an encoder-decoder sequence-to-sequence architecture. In contrast to this previous approach, our model is grounded in visual data and its corresponding spatial information, so it goes beyond word re-ordering. Also relevant to our work is Yao et al. (2016a), which previously explored the task of oracle image captioning by providing a language generation model with a list of manually defined visual concepts known to be present in the image. In addition, our model is able to leverage both quantity and spatial information as additional cues associated with each object/concept, thus allowing it to learn about verbosity and spatial relations in a supervised fashion.",
"cite_spans": [
{
"start": 164,
"end": 186,
"text": "Schmaltz et al. (2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, our contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We demonstrate that, despite encoding object layouts as a sequence using an LSTM, our model can still effectively capture spatial information for the captioning task. We perform ablation studies to measure the individual impact of object counts and locations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that a model relying only on object annotations, as opposed to pixel data, performs competitively in image captioning despite the ambiguity of the setup for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that more accurate and comprehensive descriptions can be generated for the image captioning task by combining our OBJ2TEXT model, applied to the outputs of a state-of-the-art object detector, with a standard image captioning approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate OBJ2TEXT on the tasks of object layout captioning and image captioning. In the first task, the input is an object layout that takes the form of a set of object category and bounding box pairs (o, l) = {(o_i, l_i)}, and the output is natural language. This task resembles the second task of image captioning, except that the input is an object layout instead of a standard raster image represented as a pixel array. We experiment on the MS-COCO dataset for both tasks. For the first task, object layouts are derived from ground-truth bounding box annotations, and in the second task object layouts are obtained from the outputs of an object detector applied to the input image.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "2"
},
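An object layout in this formulation is just a list of (category, bounding box) pairs. As a minimal illustrative sketch (plain Python; the helper name is hypothetical, and the normalization to [0,1] by image size follows the description in section 4.1):

```python
def normalize_box(box, img_w, img_h):
    """Scale a pixel-space (x, y, w, h) box to [0, 1] coordinates."""
    x, y, w, h = box
    return (x / img_w, y / img_h, w / img_w, h / img_h)

# A toy layout: (category, bounding box) pairs for a 640x480 image.
layout = [("person", normalize_box((10, 20, 100, 200), 640, 480)),
          ("umbrella", normalize_box((250, 10, 200, 90), 640, 480))]
```

The normalized coordinates make layouts comparable across images of different sizes.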
{
"text": "Our work is related to previous works that used clipart scenes for visually-grounded tasks, including sentence interpretation and predicting object dynamics (Fouhey and . The cited advantage of abstract scene representations, such as the ones provided by the clipart scenes dataset proposed in , is their ability to separate the complexity of pattern recognition from semantic visual representation. Abstract scene representations also maintain common-sense knowledge about the world. The works of Vedantam et al. (2015b); Eysenbach et al. (2016) proposed methods to learn common-sense knowledge from clipart scenes, while the method of Yatskar et al. (2016), similar to our work, leverages object annotations for natural images. Understanding abstract scenes has been demonstrated to be a useful capability for both language and vision tasks, and our work is another step in this direction.",
"cite_spans": [
{
"start": 496,
"end": 519,
"text": "Vedantam et al. (2015b)",
"ref_id": "BIBREF32"
},
{
"start": 522,
"end": 545,
"text": "Eysenbach et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 636,
"end": 657,
"text": "Yatskar et al. (2016)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Our work is also related to other language generation tasks such as image and video captioning (Farhadi et al., 2010; Ordonez et al., 2011; Mason and Charniak, 2014; Ordonez et al., 2015; Donahue et al., 2015; Mao et al., 2015; Fang et al., 2015). This problem is interesting because it combines two challenging but perhaps complementary tasks: visual recognition, and generating coherent language. Fueled by recent advances in training deep neural networks (Krizhevsky et al., 2012) and the availability of large annotated datasets with images and captions such as the MS-COCO dataset (Lin et al., 2014), recent methods on this task perform end-to-end learning from pixels to text. Most recent approaches use a variation of an encoder-decoder model where a convolutional neural network (CNN) extracts visual features from the input image (encoder), and passes its outputs to a recurrent neural network (RNN) that generates a caption as a sequence of words (decoder) (Karpathy and Fei-Fei, 2015; Vinyals et al., 2015). However, the MS-COCO dataset, containing object annotations, is also a popular benchmark in computer vision for the task of object detection, where the objective is to go from pixels to a collection of object locations. In this paper, we instead frame our problem as going from a collection of object categories and locations (object layouts) to image captions. This requires proposing a novel approach for encoding these object layouts instead of pixels, and allows for analyzing the image captioning task from a different perspective. Several other recent works use a similar sequence-to-sequence approach to generate text from source code input (Iyer et al., 2016), or to translate text from one language to another (Bahdanau et al., 2015).",
"cite_spans": [
{
"start": 95,
"end": 117,
"text": "(Farhadi et al., 2010;",
"ref_id": "BIBREF5"
},
{
"start": 118,
"end": 139,
"text": "Ordonez et al., 2011;",
"ref_id": "BIBREF21"
},
{
"start": 140,
"end": 165,
"text": "Mason and Charniak, 2014;",
"ref_id": "BIBREF19"
},
{
"start": 166,
"end": 187,
"text": "Ordonez et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 188,
"end": 209,
"text": "Donahue et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 210,
"end": 227,
"text": "Mao et al., 2015;",
"ref_id": "BIBREF18"
},
{
"start": 228,
"end": 246,
"text": "Fang et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 459,
"end": 484,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF14"
},
{
"start": 587,
"end": 605,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 996,
"end": 1017,
"text": "Vinyals et al., 2015)",
"ref_id": "BIBREF33"
},
{
"start": 1673,
"end": 1692,
"text": "(Iyer et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 1745,
"end": 1768,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "There have also been a few previous works explicitly analyzing the role of spatial and geometric relations between objects for vision-and-language tasks. The work of Elliott and Keller (2013) manually defined a dictionary of object-object relations based on geometric cues. The work of Ramisa et al. (2015) focused on predicting prepositions given two entities and their locations in an image. Previous works of Plummer et al. (2015) and showed that switching from a classification-based CNN network to a detection-based Fast R-CNN network improves performance for phrase localization. The work of showed that encoding image regions with spatial information is crucial for natural language object retrieval, as the task explicitly asks for locations of target objects. Unlike these previous efforts, our model is trained end-to-end for the language generation task, and takes as input a holistic view of the scene layout, potentially learning higher-order relations between objects.",
"cite_spans": [
{
"start": 174,
"end": 199,
"text": "Elliott and Keller (2013)",
"ref_id": "BIBREF2"
},
{
"start": 293,
"end": 313,
"text": "Ramisa et al. (2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "In this section we describe our base OBJ2TEXT model for encoding object layouts to produce text (section 4.1), as well as two further variations that use our model to generate captions for real images: OBJ2TEXT-YOLO, which uses the YOLO object detector (Redmon and Farhadi, 2017) to generate layouts of object locations from real images (section 4.2), and OBJ2TEXT-YOLO + CNN-RNN, which further combines the previous model with an encoder-decoder image captioning model that uses a convolutional neural network to encode the image (section 4.3).",
"cite_spans": [
{
"start": 250,
"end": 276,
"text": "(Redmon and Farhadi, 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "OBJ2TEXT is a sequence-to-sequence model that encodes an input object layout as a sequence, and decodes a textual description by predicting the next word at each time step. Given a training data set comprising N observations (o^(n), l^(n)), where each (o^(n), l^(n)) is a pair of sequences of input category and location vectors, together with a corresponding set of target captions s^(n), the encoder and decoder are trained jointly by minimizing a loss function over the training set using stochastic gradient descent:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "W* = arg min_W \u03a3_{n=1}^{N} L(o^(n), l^(n), s^(n)), (1) in which W = W_1 \u222a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "W_2 is the group of encoder parameters W_1 and decoder parameters W_2. The loss function is the negative log-likelihood of the generated description given the encoded object layout",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "L(o^(n), l^(n), s^(n)) = \u2212log p(s^(n) | h^(n)_L, W_2), (2) where h^(n)_",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "L is computed using the LSTM-based encoder (eqs. 3 and 4) from the object layout inputs o^(n), l^(n), and p(s^(n) | h^(n)_L, W_2) is computed using the LSTM-based decoder (eqs. 5, 6 and 7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
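The per-caption negative log-likelihood of Eq. 2 can be sketched numerically. Below is a minimal plain-Python sketch (not the authors' Torch/Neuraltalk2 code; all names are hypothetical), summing \u2212log p over the time steps of one target caption:

```python
import math

def caption_nll(word_probs, target_ids):
    """Negative log-likelihood of one target caption (cf. Eq. 2).

    word_probs: per-step distributions over a K-word vocabulary
                (row t is the decoder's softmax output at step t).
    target_ids: gold word indices s_1..s_T of the target caption.
    """
    return -sum(math.log(word_probs[t][w]) for t, w in enumerate(target_ids))

# Toy example: a 2-step caption over a 3-word vocabulary.
probs = [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1]]
loss = caption_nll(probs, [0, 1])  # -log(0.7) - log(0.8)
```

Minimizing this quantity over the training set with stochastic gradient descent is what Eq. 1 expresses.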
{
"text": "At inference time we encode an input layout (o, l) into its representation h_L, and sample a sentence word by word based on p(s_t | h_L, s_<t) as computed by the decoder at time step t. Finding the optimal sentence s* = arg max_s p(s | h_L) requires evaluating an exponential number of sentences, since at each time step there are K choices of word for a vocabulary of size K. For an approximate solution, we follow common practice (Vinyals et al., 2015) and use beam search, limiting the choices for words at each time step to the ones with the highest probabilities. Encoder: The encoder at each time step t takes as input a pair (o_t, l_t), where o_t is the object category encoded as a one-hot vector of size V, and",
"cite_spans": [
{
"start": 445,
"end": 467,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "l_t = [B^x_t, B^y_t, B^w_t, B^h_t]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "is the location configuration vector containing the left-most position, top-most position, and the width and height of the bounding box corresponding to object o_t, all normalized to the range [0,1] with respect to the input image dimensions. o_t and l_t are mapped to vectors of the same size k and added to form the input x_t to one time step of the LSTM-based encoder as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x_t = W_o o_t + (W_l l_t + b_l), x_t \u2208 R^k,",
"eq_num": "(3)"
}
],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "in which W_o \u2208 R^{k\u00d7V} is a categorical embedding matrix (the word encoder), and W_l \u2208 R^{k\u00d74} and bias b_l \u2208 R^k are parameters of a linear transformation unit (the object location encoder). Setting the initial value of the cell state vector c^e_0 = 0 and hidden state vector h^e_0 = 0, the LSTM-based encoder takes the input sequence (x_1, ..., x_{T_1}) and generates a sequence of hidden state vectors (h^e_1, ..., h^e_{T_1}) using the following step function (we omit cell state variables and internal transition gates for simplicity, as we use a standard LSTM cell definition):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h^e_t = LSTM(h^e_{t\u22121}, x_t; W_1).",
"eq_num": "(4)"
}
],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "We use the last hidden state vector h_L = h^e_{T_1} as the encoded representation of the input layout (o, l) from which to generate the corresponding description s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
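The encoder input construction of Eq. 3 can be sketched in a few lines. The sketch below (plain Python, toy sizes; W_o, W_l, b_l are randomly initialized stand-ins for the trained parameters) builds x_t from an object's one-hot category and its normalized bounding box; the encoder LSTM of Eq. 4 would then consume the sequence of such x_t:

```python
import random

random.seed(0)
k, V = 8, 80  # embedding size and number of object categories (toy values)

# Hypothetical encoder parameters (cf. Eq. 3): a k-by-V category
# embedding matrix W_o and a k-by-4 location projection W_l, b_l.
W_o = [[random.gauss(0, 1) for _ in range(V)] for _ in range(k)]
W_l = [[random.gauss(0, 1) for _ in range(4)] for _ in range(k)]
b_l = [0.0] * k

def encode_step_input(category_id, box):
    """x_t = W_o o_t + (W_l l_t + b_l) for one object (cf. Eq. 3).

    box = (x, y, w, h), already normalized to [0, 1]; multiplying W_o
    by the one-hot o_t just selects column `category_id` of W_o.
    """
    return [W_o[i][category_id]
            + sum(W_l[i][j] * box[j] for j in range(4)) + b_l[i]
            for i in range(k)]

# Two instances of the same category ("person") at different locations
# yield different encoder inputs; this is how location reaches the LSTM.
x1 = encode_step_input(1, (0.1, 0.2, 0.3, 0.5))
x2 = encode_step_input(1, (0.6, 0.2, 0.3, 0.5))
```

Because the category embedding and the projected location are summed into one vector, each time step carries both "what" and "where" for a single object.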
{
"text": "Decoder: The decoder takes the encoded layout h_L as input and generates a sequence of multinomial distributions over a vocabulary of words using an LSTM neural language model. The joint probability distribution of the generated sentence s = (s_1, ..., s_{T_2}) is factorized into a product of conditional probabilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(s|h_L) = \u220f_{t=1}^{T_2} p(s_t | h_L, s_{<t}),",
"eq_num": "(5)"
}
],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "where each factor is computed using a softmax function over the hidden states of the decoder LSTM as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(s_t | h_L, s_{<t}) = softmax(W_h h^d_{t\u22121} + b_h), (6) h^d_t = LSTM(h^d_{t\u22121}, W_s s_t; W_2),",
"eq_num": "(7)"
}
],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "where W_s is the categorical embedding matrix for the one-hot encoded caption sequence of symbols. By setting h^d_{\u22121} = 0 and c^d_{\u22121} = 0 for the initial hidden state and cell state, the layout representation is fed into the decoder network at time step 0 as a regular input:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h^d_0 = LSTM(h^d_{\u22121}, h_L; W_2).",
"eq_num": "(8)"
}
],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
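The decoder's per-step word distribution (Eq. 6) reduces to a softmax over a linear readout of the previous hidden state. A small plain-Python sketch (toy sizes, random stand-in parameters; the LSTM transitions of Eqs. 7-8 are omitted):

```python
import math
import random

random.seed(1)
d, K = 4, 5  # hidden size and vocabulary size (toy values)

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical output-layer parameters standing in for W_h and b_h.
W_h = [[random.gauss(0, 1) for _ in range(d)] for _ in range(K)]
b_h = [0.0] * K

def next_word_dist(h_prev):
    """p(s_t | h_L, s_<t) = softmax(W_h h^d_{t-1} + b_h)  (cf. Eq. 6)."""
    logits = [sum(W_h[i][j] * h_prev[j] for j in range(d)) + b_h[i]
              for i in range(K)]
    return softmax(logits)

p = next_word_dist([random.gauss(0, 1) for _ in range(d)])
```

Each call yields a proper distribution over the vocabulary, from which the next word is sampled or selected during decoding.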
{
"text": "To generate text, we use beam search to sample from the LSTM, as is routinely done in previous literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT",
"sec_num": "4.1"
},
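The beam search used at decoding time can be sketched generically. The version below is a minimal plain-Python illustration with a hypothetical `step_fn` interface (it is not the Neuraltalk2 implementation): at each step only the `beam_size` highest-scoring partial captions survive, instead of all K continuations.

```python
import math

def beam_search(step_fn, start_state, beam_size=2, max_len=4, eos=0):
    """Generic beam-search decoding sketch.

    step_fn(state, prev_token) -> (new_state, {token: prob}) returns the
    next-word distribution; partial captions are scored by summed log
    probabilities, and only the top `beam_size` are kept per step.
    """
    beams = [(0.0, [], start_state)]  # (log-prob, tokens, state)
    for _ in range(max_len):
        candidates = []
        for logp, toks, state in beams:
            if toks and toks[-1] == eos:  # caption already ended
                candidates.append((logp, toks, state))
                continue
            new_state, dist = step_fn(state, toks[-1] if toks else None)
            for tok, p in dist.items():
                candidates.append((logp + math.log(p), toks + [tok], new_state))
        beams = sorted(candidates, key=lambda b: b[0], reverse=True)[:beam_size]
    return beams[0][1]

# Toy "language model": token 1 is likely first, then EOS (token 0).
def toy_step(state, prev):
    if prev == 1:
        return state, {0: 0.9, 1: 0.1}
    return state, {1: 0.8, 2: 0.2}

best = beam_search(toy_step, None)  # -> [1, 0]
```

With `beam_size=1` this degenerates to greedy max sampling, which is the no-beam-search setting mentioned in section 5.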
{
"text": "For the task of image captioning we propose OBJ2TEXT-YOLO. This model takes an image as input, extracts an object layout (object categories and locations) with the state-of-the-art object detection model YOLO (Redmon and Farhadi, 2017), and uses OBJ2TEXT as described in section 4.1 to generate a natural language description of the input layout and hence of the input image. The model is trained using the standard back-propagation algorithm, but the error is not back-propagated to the object detection module.",
"cite_spans": [
{
"start": 207,
"end": 233,
"text": "(Redmon and Farhadi, 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT-YOLO",
"sec_num": "4.2"
},
{
"text": "For the image captioning task we experiment with a combined model (see Figure 2) where we take an image as input, and then use two separate computation branches to extract visual feature information and object layout information. These two streams of information are then passed to an LSTM neural language model to generate a description. Visual features are extracted using the Figure 2 : Image Captioning by joint learning of visual features and object layout encoding.",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 80,
"text": "Figure 2)",
"ref_id": null
},
{
"start": 379,
"end": 387,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "OBJ2TEXT-YOLO + CNN-RNN",
"sec_num": "4.3"
},
{
"text": "VGG-16 (Simonyan and Zisserman, 2015) convolutional neural network pre-trained on the ImageNet classification task (Russakovsky et al., 2015). Object layouts are extracted using the YOLO object detection system, and its output object locations are encoded using our proposed OBJ2TEXT encoder. These two streams of information are encoded into vectors of the same size, and their sum is input to the language model to generate a textual description. The model is trained using the standard back-propagation algorithm, where the error is back-propagated to both branches but not to the object detection module. The weights of the image CNN model were fine-tuned only after the layout encoding branch was well trained, but no significant overall performance improvements were observed.",
"cite_spans": [
{
"start": 116,
"end": 142,
"text": "(Russakovsky et al., 2015)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "OBJ2TEXT-YOLO + CNN-RNN",
"sec_num": "4.3"
},
{
"text": "We evaluate the proposed models on the MS-COCO (Lin et al., 2014) dataset, a popular image captioning benchmark that also contains object extent annotations. In the object layout captioning task the model uses the ground-truth object extents as input object layouts, while in the image captioning task the model takes raw images as input. The quality of the generated descriptions is evaluated using both human evaluations and automatic metrics. We train and validate our models based on the commonly adopted split (113,287 training images, 5,000 validation and 5,000 test images) used in (Karpathy et al., 2016), and also test our model on the MS-COCO official test benchmark.",
"cite_spans": [
{
"start": 47,
"end": 65,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 600,
"end": 623,
"text": "(Karpathy et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "We implement our models based on the open source image captioning system Neuraltalk2 (Karpathy et al., 2016). Other configurations, including data preprocessing and training hyper-parameters, also follow Neuraltalk2. We trained our models using a GTX 1080 GPU with 8GB of memory for 400k iterations, using a batch size of 16 and the Adam optimizer with alpha of 0.8, beta of 0.999 and epsilon of 1e-08. Descriptions for the CNN-RNN approach are generated using the publicly available code and model checkpoint provided by Neuraltalk2 (Karpathy et al., 2016). Captions for online test set evaluations are generated using beam search of size 2, but score histories on the split validation set are based on captions generated without beam search (i.e., max sampling at each time step).",
"cite_spans": [
{
"start": 86,
"end": 109,
"text": "(Karpathy et al., 2016)",
"ref_id": null
},
{
"start": 530,
"end": 553,
"text": "(Karpathy et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "Ablation on Object Locations and Counts: We set up an experiment where we remove the input locations from the OBJ2TEXT encoder to study the effects on the generated captions, and to confirm whether the model is actually using spatial information during surface realization. In this restricted version of our model the LSTM encoder at each time step only takes the object category embedding vector as input. The OBJ2TEXT model additionally encodes different instances of the same object category in different time steps, potentially encoding in some of its hidden states information about how many objects of a particular class are in the image. For example, in the object annotation presented in the input in Figure 1, there are two instances of \"person\". We perform an additional experiment where our model has access neither to object locations nor to the number of object instances, by providing only a set of object categories. Note that in this set of experiments the object layouts are given as inputs; thus we assume full access to ground-truth object annotations, even in the test split. In the experimental results section we use the \"-GT\" postfix to indicate that input object layouts are obtained from the ground-truth object annotations provided by the MS-COCO dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 706,
"end": 714,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "Image Captioning Experiment: In this experiment we assess whether the image captioning model OBJ2TEXT-YOLO, which relies only on object categories and locations, can give performance comparable to a CNN-RNN model based on Neuraltalk2 (Karpathy et al., 2016) that has full access to visual image features. We also explore how much a combined OBJ2TEXT-YOLO + CNN-RNN model can improve over a CNN-RNN model by fusing object count and location information that is not explicitly encoded in a traditional CNN-RNN approach.",
"cite_spans": [
{
"start": 235,
"end": 257,
"text": "(Karpathy et al., 2016",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "Human Evaluation Protocol: We use a two-alternative forced-choice (2AFC) approach to compare two methods that generate captions. For this, we set up a task on Amazon Mechanical Turk where users are presented with an image and two alternative captions, and they have to choose the caption that best describes the image. Users are not prompted to use any single criterion but rather a holistic assessment of the captions, including their semantics, syntax, and the degree to which they describe the image content. In our experiment we randomly sample 500 captions generated by various models for MS-COCO online test set images, and use three users per image to obtain annotations. Note that three users choosing randomly between two options have a 25% chance of all selecting the same caption for a given image. In our experiments comparing method A vs. method B, we report the percentage of times A was picked over B (Choice-all), the percentage of times all users selected the same method, either A or B (Agreement), and the percentage of times A was picked over B only for those cases where all users agreed (Choice-agreement).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
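As a quick arithmetic check of the 25% chance-agreement figure quoted above: with three annotators each picking one of two captions uniformly at random, all three agree only if all pick A or all pick B, i.e. with probability 2 \u00d7 (1/2)\u00b3.

```python
from fractions import Fraction

# Three annotators each pick caption A or B uniformly at random;
# they all agree iff all pick A or all pick B.
p_agree = 2 * Fraction(1, 2) ** 3  # 1/4, i.e. 25%
```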
{
"text": "Impact of Object Locations and Counts: Figure 3a shows the CIDEr (Vedantam et al., 2015a) and BLEU-4 (Papineni et al., 2002) score history on our validation set during 400k iterations of training of OBJ2TEXT, as well as a version of our model that does not use object locations, and a version that uses neither object locations nor object counts. These results show that our model effectively uses both object locations and counts to generate better captions, and that the absence of either of these two cues hurts performance. Table 1 confirms these results on the test split after a full round of training.",
"cite_spans": [
{
"start": 65,
"end": 89,
"text": "(Vedantam et al., 2015a)",
"ref_id": "BIBREF31"
},
{
"start": 103,
"end": 126,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 39,
"end": 48,
"text": "Figure 3a",
"ref_id": "FIGREF2"
},
{
"start": 549,
"end": 556,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Furthermore, human evaluation results in the first row of Table 2 show that the OBJ2TEXT model with access to object locations is preferred by users, especially in cases where all evaluators agreed on their choice (62% over the baseline that does not have access to locations). In Figure 4 we additionally present qualitative examples showing predictions side-by-side between OBJ2TEXT-GT and OBJ2TEXT-GT (no obj-locations). These results indicate that 1) perhaps not surprisingly, object counts is useful for generating better quality descriptions, and 2) object location information when properly encoded, is an important cue for generating more accurate descriptions. We ad-ditionally implemented a nearest neighbor baseline by representing the objects in the input layout using an orderless bag-of-words representation of object counts and the CIDEr score on the test split was only 0.387.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 281,
"end": 289,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "On top of OBJ2TEXT we additionally experimented with the global attention model proposed in (Luong et al., 2015) so that a weighted combination of the encoder hidden states are forwarded to the decoding neural language model, however we did not notice any overall gains in terms of accuracy from this formulation. We observed that this model provided gains only for larger input sequences where it is more likely that the LSTM network forgets its past history (Bahdanau et al., 2015) . However in MS-COCO the average number of objects in each image is rather modest, so the last hidden state can capture well the overall nuances of the visual input.",
"cite_spans": [
{
"start": 92,
"end": 112,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 460,
"end": 483,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Object Layout Encoding for Image Captioning: Figure 3b shows the CIDEr, and BLEU-4 score history on the validation set during 400k iterations of training of OBJ2TEXT-YOLO, CNN-RNN, and their combination. These results show that OBJ2TEXT-YOLO performs surprisingly close to CNN-RNN, and the model resulting from combining the two, clearly outperforms each method alone. Table 3 shows MS-COCO evaluation results on the test set using their online benchmark service, and confirms results obtained in the validation split, where CNN-RNN seems to have only a slight edge over OBJ2TEXT-YOLO which lacks access to pixel data after the object detection stage. Human evaluation results in Table 2 rows 2, and 3, further confirm these findings. These results show that meaningful descriptions could be generated solely based on object categories and locations information, even without access to color and texture input.",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 54,
"text": "Figure 3b",
"ref_id": "FIGREF2"
},
{
"start": 369,
"end": 376,
"text": "Table 3",
"ref_id": null
},
{
"start": 680,
"end": 687,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The combined model performs better than the two models, improving the CIDEr score of the basic CNN-RNN model from 0.863 to 0.950, and human evaluation results show that the combined model is preferred over the basic CNN-RNN model for 65.3% of the images for which all evaluators were in agreement about the selected method. These results show that explicitly encoded object counts and location information, which is often overlooked in traditional image captioning approaches, could boost the performance of existing models. Intuitively, object lay- out and visual features are complementary: neural network models for visual feature extraction are trained on a classification task where object-level information such as number of instances and locations are ignored in the objective. Object layouts on the other hand, contain categories and their bounding-boxes but don't have access to rich image features such as image background, object attributes and objects with categories not present in the object detection vocabulary. Figure 5 provides a three-way comparison of captions generated by the three image captioning models, with preferred captions by human evaluators annotated in bold text. Analysis on actual outputs gives us insights into the benefits of combing object layout information and visual features obtained using a CNN. Our OBJ2TEXT-YOLO model makes many mistakes because of lack of image context information since it only has access to object layout, while CNN-RNN makes many mistakes because the visual recognition model is imperfect at predicting the correct content. The combined model is usually able to generate more accurate and comprehensive descriptions.",
"cite_spans": [],
"ref_spans": [
{
"start": 1028,
"end": 1036,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In this work we only explored encoding spatial information with object labels, but object la-bels could be readily augmented with rich semantic features that are more detailed descriptions of objects or image regions. For example, the work of You et al. (2016) and Yao et al. (2016b) showed that visual features trained with semantic concepts (text entities mentioned in captions) instead of object labels is useful for image captioning, although they didn't consider encoding semantic concepts with spatial information. In case of object annotations the MS-COCO dataset only provides object labels and bounding-boxes, but there are other datasets such as Flick30K Entities (Plummer et al., 2015) , and the Visual Genome dataset (Krishna et al., 2017 ) that provide richer region-tophrase correspondence annotations. In addition, the fusion of object counts and spatial information with CNN visual features could in principle benefit other vision and language tasks such as visual question answering. We leave these possible extensions as future work.",
"cite_spans": [
{
"start": 243,
"end": 260,
"text": "You et al. (2016)",
"ref_id": "BIBREF39"
},
{
"start": 265,
"end": 283,
"text": "Yao et al. (2016b)",
"ref_id": "BIBREF37"
},
{
"start": 674,
"end": 696,
"text": "(Plummer et al., 2015)",
"ref_id": "BIBREF23"
},
{
"start": 729,
"end": 750,
"text": "(Krishna et al., 2017",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We introduced OBJ2TEXT, a sequence-tosequence model to generate visual descriptions for object layouts where only categories and locations are specified. Our proposed model Alternatives Choice-all Choice-agreement Agreement OBJ2TEXT-GT vs. 54 shows that an orderless visual input representation of concepts is not enough to produce good descriptions, but object extents, locations, and object counts, all contribute to generate more accurate image descriptions. Crucially we show that our encoding mechanism is able to capture useful spatial information using an LSTM network to produce image descriptions, even when the input is provided as a sequence rather than as an explicit 2D representation of objects. Additionally, using our proposed OBJ2TEXT model in combination with an existing image captioning model and a robust object detector we showed improved results in the task of image captioning. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We build on neuraltalk2 and make our Torch code, and an interactive demo of our model available in the following url: http://vision.cs.virginia.edu/obj2text",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by an NVIDIA Hardware Grant. We are also thankful for the feedback from Mark Yatskar and anonymous reviewers of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In International Con- ference on Learning Representations (ICLR).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Long-term recurrent convolutional networks for visual recognition and description",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"Anne"
],
"last": "Hendricks",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Guadarrama",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Subhashini",
"middle": [],
"last": "Venugopalan",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
},
{
"first": "Trevor",
"middle": [
"Darrell"
],
"last": "",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE conference on computer vision and pattern recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "2625--2634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadar- rama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recogni- tion and description. In IEEE conference on com- puter vision and pattern recognition (CVPR), pages 2625-2634.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Image description using visual dependency representations",
"authors": [
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "13",
"issue": "",
"pages": "1292--1302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Desmond Elliott and Frank Keller. 2013. Image de- scription using visual dependency representations. In EMNLP, volume 13, pages 1292-1302.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Who is mistaken? arXiv preprint",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Eysenbach",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Vondrick",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.01175"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Eysenbach, Carl Vondrick, and Antonio Tor- ralba. 2016. Who is mistaken? arXiv preprint arXiv:1612.01175.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "From captions to visual concepts and back",
"authors": [
{
"first": "Saurabh",
"middle": [],
"last": "Hao Fang",
"suffix": ""
},
{
"first": "Forrest",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Iandola",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Rupesh",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Mitchell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Platt",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "1473--1482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Doll\u00e1r, Jianfeng Gao, Xi- aodong He, Margaret Mitchell, John C Platt, et al. 2015. From captions to visual concepts and back. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1473-1482.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Every picture tells a story: Generating sentences from images",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Mohsen",
"middle": [],
"last": "Hejrati",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Amin"
],
"last": "Sadeghi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Cyrus",
"middle": [],
"last": "Rashtchian",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Forsyth",
"suffix": ""
}
],
"year": 2010,
"venue": "European conference on computer vision",
"volume": "",
"issue": "",
"pages": "15--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every pic- ture tells a story: Generating sentences from images. In European conference on computer vision, pages 15-29. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Predicting object dynamics in scenes",
"authors": [
{
"first": "F",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fouhey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lawrence Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "2019--2026",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David F Fouhey and C Lawrence Zitnick. 2014. Pre- dicting object dynamics in scenes. In Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition (CVPR), pages 2019-2026.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Natural language object retrieval",
"authors": [
{
"first": "Ronghang",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Huazhe",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Jiashi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4555--4564",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell. 2016. Nat- ural language object retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4555-4564.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Summarizing source code using a neural attention model",
"authors": [
{
"first": "Srinivasan",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Alvin",
"middle": [],
"last": "Cheung",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2073--2083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 2073-2083, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep visualsemantic alignments for generating image descriptions",
"authors": [
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "3128--3137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrej Karpathy and Li Fei-Fei. 2015. Deep visual- semantic alignments for generating image descrip- tions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128-3137.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Unsupervised concept-to-text generation with hypergraphs",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2012,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "752--761",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ioannis Konstas and Mirella Lapata. 2012. Unsuper- vised concept-to-text generation with hypergraphs. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 752- 761.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations",
"authors": [
{
"first": "Ranjay",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Yuke",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Groth",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Hata",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Kravitz",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Kalantidis",
"suffix": ""
},
{
"first": "Li-Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Shamma",
"suffix": ""
}
],
"year": 2017,
"venue": "International Journal of Computer Vision",
"volume": "123",
"issue": "1",
"pages": "32--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image anno- tations. International Journal of Computer Vision, 123(1):32-73.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Imagenet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "1097--1105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classification with deep con- volutional neural networks. In Neural Information Processing Systems (NIPS), pages 1097-1105.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Design of a knowledge-based report generator",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Kukich",
"suffix": ""
}
],
"year": 1983,
"venue": "Proceedings of the 21st annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "145--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Kukich. 1983. Design of a knowledge-based re- port generator. In Proceedings of the 21st annual meeting on Association for Computational Linguis- tics, pages 145-150. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Microsoft coco: Common objects in context",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "740--755",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV, pages 740- 755. Springer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christo- pher D. Manning. 2015. Effective approaches to attention-based neural machine translation. CoRR, abs/1508.04025.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep captioning with multimodal recurrent neural networks (m-rnn). ICLR",
"authors": [
{
"first": "Junhua",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Yuille",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan Yuille. 2015. Deep captioning with multimodal recurrent neural networks (m-rnn). ICLR.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Nonparametric method for data-driven image captioning",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Mason",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL (2)",
"volume": "",
"issue": "",
"pages": "592--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Mason and Eugene Charniak. 2014. Nonpara- metric method for data-driven image captioning. In ACL (2), pages 592-598.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Large scale retrieval and generation of image descriptions",
"authors": [
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Xufeng",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Polina",
"middle": [],
"last": "Kuznetsova",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Kota",
"middle": [],
"last": "Yamaguchi",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Stratos",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Alyssa",
"middle": [],
"last": "Mensch",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Daume",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"C"
],
"last": "Hal",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"L"
],
"last": "Choi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Berg",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Computer Vision",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vicente Ordonez, Xufeng Han, Polina Kuznetsova, Girish Kulkarni, Margaret Mitchell, Kota Yam- aguchi, Karl Stratos, Amit Goyal, Jesse Dodge, Alyssa Mensch, III Daume, Hal, Alexander C. Berg, Yejin Choi, and Tamara L. Berg. 2015. Large scale retrieval and generation of image descriptions. In- ternational Journal of Computer Vision, pages 1-14.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Im2text: Describing images using 1 million captioned photographs",
"authors": [
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"L"
],
"last": "Berg",
"suffix": ""
}
],
"year": 2011,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1143--1151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vicente Ordonez, Girish Kulkarni, and Tamara L Berg. 2011. Im2text: Describing images using 1 million captioned photographs. In Advances in Neural In- formation Processing Systems, pages 1143-1151.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bryan",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Plummer",
"suffix": ""
},
{
"first": "Chris",
"middle": [
"M"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Juan",
"middle": [
"C"
],
"last": "Cervantes",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Caicedo",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lazebnik",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "2641--2649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image- to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641-2649.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Combining geometric, textual and visual features for predicting prepositions in image descriptions",
"authors": [
{
"first": "Arnau",
"middle": [],
"last": "Ramisa",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Francesc",
"middle": [],
"last": "Dellandrea",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Moreno-Noguer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2015,
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "214--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arnau Ramisa, JK Wang, Ying Lu, Emmanuel Del- landrea, Francesc Moreno-Noguer, and Robert Gaizauskas. 2015. Combining geometric, textual and visual features for predicting prepositions in im- age descriptions. In Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 214-220. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "YOLO9000: better, faster, stronger",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Redmon",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
}
],
"year": 2017,
"venue": "Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Redmon and Ali Farhadi. 2017. YOLO9000: better, faster, stronger. In Computer Vision and Pat- tern Recognition (CVPR).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Grounding of textual phrases in images by reconstruction",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Ronghang",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Schiele",
"suffix": ""
}
],
"year": 2016,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "817--834",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, and Bernt Schiele. 2016. Ground- ing of textual phrases in images by reconstruction. In European Conference on Computer Vision, pages 817-834. Springer.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Imagenet large scale visual recognition challenge",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Russakovsky",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bernstein",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Computer Vision",
"volume": "115",
"issue": "3",
"pages": "211--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition chal- lenge. International Journal of Computer Vision, 115(3):211-252.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Word ordering without syntax",
"authors": [
{
"first": "Allen",
"middle": [],
"last": "Schmaltz",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Shieber",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2319--2324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allen Schmaltz, Alexander M. Rush, and Stuart Shieber. 2016. Word ordering without syntax. In Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2319-2324, Austin, Texas.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (ICLR",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Automatically extracting and representing collocations for language generation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Smadja",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1990,
"venue": "Annual meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "252--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank A Smadja and Kathleen R McKeown. 1990. Au- tomatically extracting and representing collocations for language generation. In Annual meeting of the Association for Computational Linguistics (ACL), pages 252-259.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Cider: Consensus-based image description evaluation",
"authors": [
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4566--4575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015a. Cider: Consensus-based image de- scription evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 4566-4575.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Learning common sense through visual abstraction",
"authors": [
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Tanmay",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "2542--2550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramakrishna Vedantam, Xiao Lin, Tanmay Batra, C Lawrence Zitnick, and Devi Parikh. 2015b. Learning common sense through visual abstraction. In Proceedings of the IEEE International Confer- ence on Computer Vision (ICCV), pages 2542-2550.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Show and tell: A neural image caption generator",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Toshev",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "3156--3164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In Proceedings of the IEEE conference on computer vision and pattern recogni- tion, pages 3156-3164.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Semantically conditioned lstm-based natural language generation for spoken dialogue systems",
"authors": [
{
"first": "Milica",
"middle": [],
"last": "Tsung-Hsien Wen",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Gasic",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1711--1721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrk\u0161i\u0107, Pei- Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural lan- guage generation for spoken dialogue systems. In Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1711-1721, Lis- bon, Portugal.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhudinov",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2048--2057",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual at- tention. In International Conference on Machine Learning, pages 2048-2057.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Oracle performance for visual captioning",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Ballas",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "John",
"middle": [
"R"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "British Machine Vision Conference (BMVC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Yao, Nicolas Ballas, Kyunghyun Cho, John R. Smith, and Yoshua Bengio. 2016a. Oracle perfor- mance for visual captioning. In British Machine Vi- sion Conference (BMVC).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Boosting image captioning with attributes",
"authors": [
{
"first": "Ting",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yingwei",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Yehao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhaofan",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Mei",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01646"
]
},
"num": null,
"urls": [],
"raw_text": "Ting Yao, Yingwei Pan, Yehao Li, Zhaofan Qiu, and Tao Mei. 2016b. Boosting image captioning with attributes. arXiv preprint arXiv:1611.01646.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Stating the obvious: Extracting visual common sense knowledge",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "193--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Yatskar, Vicente Ordonez, and Ali Farhadi. 2016. Stating the obvious: Extracting visual common sense knowledge. In Proceedings of the 2016 Con- ference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies, pages 193-198, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Image captioning with semantic attention",
"authors": [
{
"first": "Quanzeng",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Hailin",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Zhaowen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4651--4659",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. 2016. Image captioning with seman- tic attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4651-4659.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Bringing semantics into focus using visual abstraction",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "3009--3016",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C Lawrence Zitnick and Devi Parikh. 2013. Bring- ing semantics into focus using visual abstraction. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR), pages 3009-3016.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Learning the visual interpretation of sentences",
"authors": [
{
"first": "Devi",
"middle": [],
"last": "C Lawrence Zitnick",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "1681--1688",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C Lawrence Zitnick, Devi Parikh, and Lucy Vander- wende. 2013. Learning the visual interpretation of sentences. In Proceedings of the IEEE International Conference on Computer Vision, pages 1681-1688.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"text": "(no obj-locations, no obj-counts) OBJ2TEXT-GT (no obj-locations) OBJ2TEXT-GT (a) Score histories of lesioned versions of the proposed model for the task of object layout captioning. Score histories of image captioning models. Performance boosts of CNN-RNN and combined model around iteration 100K and 250K are due to fine-tuning of the image CNN model.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Score histories of various models on the MS COCO split validation set.MethodBleu 4 CIDEr METEOR ROUGE-L OBJ2TEXT-GT (no obj-locations, counts)",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "3: The 5-Refs and 40-Refs performances of OBJ2TEXT-YOLO, CNN-RNN and the combined approach on the MS COCO online test set. The 5-Refs performance is measured using 5 ground-truth reference captions, while 40-Refs performance is measured using 40 ground-truth reference captions. a: three buses parked in a parking lot b: a bus is parked in front of a bus stop a: two people riding on the back of an elephant b: a man and a woman riding on the back of an elephant a: a man is riding a horse and a dog is carrying a bag b: two dogs are sitting on the back of a group of people standing around a parking meter b: a man riding a motorcycle down a street a: a woman sitting on a couch with a man holding a doughnut b: a woman and a child sitting at a table with food a: two young girls holding tennis racquets on a court b: a man holding a tennis racquet on a",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "Qualitative examples comparing generated captions of (a) OBJ2TEXT-GT, and (b) OBJ2TEXT-GT (no obj-locations).",
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"text": "Qualitative examples comparing the generated captions of (a) OBJ2TEXT-YOLO, (b) CNN-RNN and (c) OBJ2TEXT-YOLO + CNN-RNN. Images are selected from the 500 human evaluation images and annotated with YOLO object detection results. Captions preferred by human evaluators with agreement are highlighted in bold text.",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Performance of lesioned versions of the proposed model on the MS COCO split test set.",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "Human evaluation results using two-alternative forced choice evaluation. Choice-all is percentage the first alternative was chosen. Choice-agreement is percentage the first alternative was chosen only when all annotators agreed. Agreement is percentage where all annotators agreed (random is 25%).",
"html": null,
"content": "<table><tr><td colspan=\"5\">MS COCO Test Set Performance CIDEr ROUGE-L METEOR B-4</td><td>B-3</td><td>B-2</td><td>B-1</td></tr><tr><td>5-Refs</td><td/><td/><td/><td/></tr><tr><td>OBJ2TEXT-YOLO</td><td>0.830</td><td>0.497</td><td>0.228</td><td colspan=\"2\">0.262 0.361 0.500 0.681</td></tr><tr><td>CNN-RNN</td><td>0.857</td><td>0.514</td><td>0.237</td><td colspan=\"2\">0.283 0.387 0.529 0.705</td></tr><tr><td colspan=\"2\">OBJ2TEXT-YOLO + CNN-RNN 0.932</td><td>0.528</td><td>0.250</td><td colspan=\"2\">0.300 0.404 0.546 0.719</td></tr><tr><td>40-Refs</td><td/><td/><td/><td/></tr><tr><td>OBJ2TEXT-YOLO</td><td>0.853</td><td>0.636</td><td>0.305</td><td colspan=\"2\">0.508 0.624 0.746 0.858</td></tr><tr><td>CNN-RNN</td><td>0.863</td><td>0.654</td><td>0.318</td><td colspan=\"2\">0.540 0.656 0.775 0.877</td></tr><tr><td colspan=\"2\">OBJ2TEXT-YOLO + CNN-RNN 0.950</td><td>0.671</td><td>0.334</td><td colspan=\"2\">0.569 0.686 0.802 0.896</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF4": {
"text": "",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
}
}
}
}