{
"paper_id": "D16-1044",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:35:59.264633Z"
},
"title": "Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding",
"authors": [
{
"first": "Akira",
"middle": [],
"last": "Fukui",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UC Berkeley EECS",
"location": {
"region": "CA",
"country": "United States"
}
},
"email": ""
},
{
"first": "Dong",
"middle": [
"Huk"
],
"last": "Park",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UC Berkeley EECS",
"location": {
"region": "CA",
"country": "United States"
}
},
"email": ""
},
{
"first": "Daylen",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UC Berkeley EECS",
"location": {
"region": "CA",
"country": "United States"
}
},
"email": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rohrbach",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UC Berkeley EECS",
"location": {
"region": "CA",
"country": "United States"
}
},
"email": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UC Berkeley EECS",
"location": {
"region": "CA",
"country": "United States"
}
},
"email": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UC Berkeley EECS",
"location": {
"region": "CA",
"country": "United States"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge.",
"pdf_parse": {
"paper_id": "D16-1044",
"_pdf_hash": "",
"abstract": [
{
"text": "Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Representation learning for text and images has been extensively studied in recent years. Recurrent neural networks (RNNs) are often used to represent sentences or phrases (Sutskever et al., 2014; 2015), and convolutional neural networks (CNNs) have shown to work best to represent images (Donahue et al., 2013; . For tasks such as visual question answering (VQA) and visual grounding, most approaches require joining the representation of both modalities. For combining the two vector representations (multimodal pooling), current approaches in VQA or grounding rely on concatenating vectors or applying element-wise sum or product. While this generates a joint representation, it might not be expressive enough to fully capture the complex associations between the two different modalities.",
"cite_spans": [
{
"start": 172,
"end": 196,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF34"
},
{
"start": 289,
"end": 311,
"text": "(Donahue et al., 2013;",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose to rely on Multimodal Compact Bilinear pooling (MCB) to get a joint representation. Bilinear pooling computes the outer product between two vectors, which allows, in contrast to element-wise product, a multiplicative interaction between all elements of both vectors. Bilinear pooling models (Tenenbaum and Freeman, 2000) have recently been shown to be beneficial for fine-grained classification for vision only tasks (Lin et al., 2015) . However, given their high dimensionality (n 2 ), bilinear pooling has so far not been widely used. In this paper, we adopt the idea from which shows how to efficiently compress bilinear pooling for a single modality. In this work, we discuss and extensively evaluate the extension to the multimodal case for text and visual modalities. As shown in Figure 1 , Multimodal Compact Bilinear pooling (MCB) is approximated by randomly projecting the image and text representations to a higher dimensional space (using Count Sketch (Charikar et al., 2002) ) and then convolving both vectors efficiently by using element-wise product in Fast Fourier Transform (FFT) space. We use MCB to predict answers for the VQA task and locations for the visual grounding task. For open-ended question answering, we present an architecture for VQA which uses MCB twice, once to predict spatial attention and the second time to predict the answer. For multiple-choice question answering we introduce a third MCB to relate the encoded answer to the question-image space. Additionally, we discuss the benefit of attention maps and additional training data for the VQA task. To summarize, MCB is evaluated on two tasks, four datasets, and with a diverse set of ablations and comparisons to the state-of-the-art.",
"cite_spans": [
{
"start": 317,
"end": 346,
"text": "(Tenenbaum and Freeman, 2000)",
"ref_id": "BIBREF35"
},
{
"start": 443,
"end": 461,
"text": "(Lin et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 989,
"end": 1012,
"text": "(Charikar et al., 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 812,
"end": 820,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multimodal pooling. Current approaches to multimodal pooling involve element-wise operations or vector concatenation. In the visual question answering domain, a number of models have been proposed. Simpler models such as iBOWIMG baseline (Zhou et al., 2015) use concatenation and fully connected layers to combine the image and question modalities. Stacked Attention Networks and Spatial Memory Networks use LSTMs or extract soft-attention on the image features, but ultimately use element-wise product or element-wise sum to merge modalities. D-NMN (Andreas et al., 2016a) introduced REINFORCE to dynamically create a network and use element-wise product to join attentions and element-wise sum predict answers. Dynamic Memory Networks (DMN) (Xiong et al., 2016) pool the image and question with element-wise product and sum, attending to part of the image and question with an Episodic Memory Module (Kumar et al., 2016) . DPPnet (Noh et al., 2015) creates a Parameter Prediction Network which learns to predict the parameters of the second to last visual recognition layer dynamically from the question. Similar to this work, DPPnet allows multiplicative interactions between the visual and question encodings. Lu et al. (2016) recently proposed a model that extracts multiple co-attentions on the image and question and combines the co-attentions in a hierarchical manner using element-wise sum, concatenation, and fully connected layers. For the visual grounding task, Rohrbach et al. (2016) propose an approach where the language phrase embedding is concatenated with the visual features in order to predict the attention weights over multiple bounding box proposals. Similarly, Hu et al. (2016a) concatenate phrase embeddings with visual features at different spatial locations to obtain a segmentation.",
"cite_spans": [
{
"start": 238,
"end": 257,
"text": "(Zhou et al., 2015)",
"ref_id": "BIBREF42"
},
{
"start": 743,
"end": 763,
"text": "(Xiong et al., 2016)",
"ref_id": null
},
{
"start": 902,
"end": 922,
"text": "(Kumar et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 932,
"end": 950,
"text": "(Noh et al., 2015)",
"ref_id": "BIBREF28"
},
{
"start": 1214,
"end": 1230,
"text": "Lu et al. (2016)",
"ref_id": "BIBREF25"
},
{
"start": 1474,
"end": 1496,
"text": "Rohrbach et al. (2016)",
"ref_id": "BIBREF26"
},
{
"start": 1685,
"end": 1702,
"text": "Hu et al. (2016a)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Bilinear pooling. Bilinear pooling has been applied to the fine-grained visual recognition task. Lin et al. (2015) use two CNNs to extract features from an image and combine the resulting vectors using an outer product, which is fully connected to an output layer. address the space and time complexity of bilinear features by viewing the bilinear transformation as a polynomial kernel. Pham and Pagh (2013) describe a method to approximate the polynomial kernel using Count Sketches and convolutions.",
"cite_spans": [
{
"start": 97,
"end": 114,
"text": "Lin et al. (2015)",
"ref_id": "BIBREF24"
},
{
"start": 387,
"end": 407,
"text": "Pham and Pagh (2013)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Joint multimodal embeddings. In order to model similarities between two modalities, many prior works have learned joint multimodal spaces, or embeddings. Some of such embeddings are based on Canonical Correlation Analysis (Hardoon et al., 2004) e.g. (Gong et al., 2014; Klein et al., 2015; Plummer et al., 2015) , linear models with ranking loss (Frome et al., 2013; Karpathy and Fei-Fei, 2015; Weston et al., 2011) or non-linear deep learning models (Kiros et al., 2014; Mao et al., 2015; Ngiam et al., 2011) . Our multimodal compact bilinear pooling can be seen as a complementary operation that allows us to capture different interactions between two modalities more expressively than e.g. concatenation. Consequently, many embedding learning approaches could benefit from incorporating such interactions. ",
"cite_spans": [
{
"start": 250,
"end": 269,
"text": "(Gong et al., 2014;",
"ref_id": "BIBREF9"
},
{
"start": 270,
"end": 289,
"text": "Klein et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 290,
"end": 311,
"text": "Plummer et al., 2015)",
"ref_id": null
},
{
"start": 346,
"end": 366,
"text": "(Frome et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 367,
"end": 394,
"text": "Karpathy and Fei-Fei, 2015;",
"ref_id": null
},
{
"start": 395,
"end": 415,
"text": "Weston et al., 2011)",
"ref_id": "BIBREF38"
},
{
"start": 451,
"end": 471,
"text": "(Kiros et al., 2014;",
"ref_id": "BIBREF19"
},
{
"start": 472,
"end": 489,
"text": "Mao et al., 2015;",
"ref_id": "BIBREF26"
},
{
"start": 490,
"end": 509,
"text": "Ngiam et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "0 -x n1 ... 0 -x 1 0 0 x 2 -q 2 0 ... q 4 0 0 q n2 q 9 q 1 q 2 ... q n2 x 1 x 2 ... x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a = argmax a\u2208A p(a|x, q; \u03b8)",
"eq_num": "(1)"
}
],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "with parameters \u03b8 and the set of answers or locations A. For an image embedding x = \u039e(x) (i.e. a CNN) and question embedding q = \u2126(q) (i.e. an LSTM), we are interested in getting a good joint representation by pooling both representations. With a multimodal pooling \u03a6(x, q) that encodes the relationship between x and q well, it becomes easier to learn a classifier for Equation (1). In this section, we first discuss our multimodal pooling \u03a6 for combining representations from different modalities into a single representation (Sec. 3.1) and then detail our architectures for VQA (Sec. 3.2) and visual grounding (Sec. 3.3), further explaining how we predict\u00e2 with the given image representation \u039e and text representation \u2126.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Bilinear models (Tenenbaum and Freeman, 2000) take the outer product of two vectors x \u2208 R n 1 and q \u2208 R n 2 and learn a model W (here linear), i.e. z = W [x \u2297 q], where \u2297 denotes the outer product (xq T ) and [ ] denotes linearizing the matrix in a vector. As discussed in the introduction, bilinear pooling is interesting because it allows all elements of both vectors to interact with each other in a multiplicative",
"cite_spans": [
{
"start": 16,
"end": 45,
"text": "(Tenenbaum and Freeman, 2000)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Compact Bilinear Pooling (MCB)",
"sec_num": "3.1"
},
{
"text": "Algorithm 1 Multimodal Compact Bilinear\n1: input: v_1 \u2208 R^{n_1}, v_2 \u2208 R^{n_2}\n2: output: \u03a6(v_1, v_2) \u2208 R^d\n3: procedure MCB(v_1, v_2, n_1, n_2, d)\n4:   for k \u2190 1 . . . 2 do\n5:     if h_k, s_k not initialized then\n6:       for i \u2190 1 . . . n_k do\n7:         sample h_k[i] from {1, . . . , d}\n8:         sample s_k[i] from {\u22121, 1}\n9:     v'_k = \u03a8(v_k, h_k, s_k, n_k)\n10:  \u03a6 = FFT^{-1}(FFT(v'_1) \u2299 FFT(v'_2))\n11:  return \u03a6\n12: procedure \u03a8(v, h, s, n)\n13:   y = [0, . . . , 0]\n14:   for i \u2190 1 . . . n do\n15:     y[h[i]] = y[h[i]] + s[i] \u2022 v[i]\n16:   return y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Compact Bilinear Pooling (MCB)",
"sec_num": "3.1"
},
{
"text": "way. However, the high-dimensional representation (i.e. when n_1 and n_2 are large) leads to an infeasible number of parameters to learn in W. For example, we use n_1 = n_2 = 2048 and z \u2208 R^{3000} for VQA. W thus would have 12.5 billion parameters, which leads to very high memory consumption and high computation times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Compact Bilinear Pooling (MCB)",
"sec_num": "3.1"
},
{
"text": "We thus need a method that projects the outer product to a lower dimensional space and also avoids computing the outer product directly. As suggested by for a single modality, we rely on the Count Sketch projection function \u03a8 (Charikar et al., 2002) , which projects a vector v \u2208 R n to y \u2208 R d . We initialize two vectors s \u2208 {\u22121, 1} n and h \u2208 {1, ..., d} n : s contains either 1 or \u22121 for each index, and h maps each index i in the input v to an index j in the output y. Both s and h are initialized randomly from a uniform distribution and remain constant for future invocations of count sketch. y is initialized as a zero vector. For every element v[i] its destination index j = h[i] is looked up using h, and",
"cite_spans": [
{
"start": 226,
"end": 249,
"text": "(Charikar et al., 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Compact Bilinear Pooling (MCB)",
"sec_num": "3.1"
},
{
"text": "s[i] \u2022 v[i] is added to y[j]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Compact Bilinear Pooling (MCB)",
"sec_num": "3.1"
},
{
"text": ". See lines 1-9 and 12-16 in Algorithm 1. This allows us to project the outer product to a lower dimensional space, which reduces the number of parameters in W . To avoid computing the outer product explicitly, Pham and Pagh (2013) showed that the count sketch of the outer product of two vectors can be expressed as convolution of both count sketches: where * is the convolution operator. Additionally, the convolution theorem states that convolution in the time domain is equivalent to element-wise product in the frequency domain. The convolution x * q can be rewritten as FFT \u22121 (FFT(x ) FFT(q )), where refers to element-wise product. These ideas are summarized in Figure 2 and formalized in Algorithm 1, which is based on the Tensor Sketch algorithm of Pham and Pagh (2013) . We invoke the algorithm with v 1 = x and v 2 = q. We note that this easily extends and remains efficient for more than two multi-modal inputs as the combination happens as element-wise product.",
"cite_spans": [
{
"start": 211,
"end": 231,
"text": "Pham and Pagh (2013)",
"ref_id": "BIBREF30"
},
{
"start": 759,
"end": 779,
"text": "Pham and Pagh (2013)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 670,
"end": 678,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multimodal Compact Bilinear Pooling (MCB)",
"sec_num": "3.1"
},
{
"text": "\u03a8(x \u2297 q, h, s) = \u03a8(x, h, s) * \u03a8(q, h, s),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Compact Bilinear Pooling (MCB)",
"sec_num": "3.1"
},
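As a minimal NumPy sketch of the procedure just described (our illustration, not the authors' released code; the function names `count_sketch` and `mcb` are ours):

```python
import numpy as np

def count_sketch(v, h, s, d):
    # Psi(v, h, s): input slot i adds s[i] * v[i] to output slot h[i].
    y = np.zeros(d)
    np.add.at(y, h, s * v)  # unbuffered add handles repeated destination slots
    return y

def mcb(x, q, d, seed=0):
    # Tensor Sketch approximation of bilinear pooling: count-sketch each input,
    # then circularly convolve the sketches via element-wise product in FFT space.
    rng = np.random.default_rng(seed)
    h1 = rng.integers(0, d, size=x.size)       # random hash h in {0, ..., d-1}
    s1 = rng.choice([-1.0, 1.0], size=x.size)  # random signs s in {-1, 1}
    h2 = rng.integers(0, d, size=q.size)
    s2 = rng.choice([-1.0, 1.0], size=q.size)
    sx = count_sketch(x, h1, s1, d)
    sq = count_sketch(q, h2, s2, d)
    return np.fft.irfft(np.fft.rfft(sx) * np.fft.rfft(sq), n=d)
```

The last line computes the circular convolution of the two sketches in FFT space; it coincides with a count sketch of the outer product x ⊗ q built with hash (h1[i] + h2[j]) mod d and sign s1[i] * s2[j], so the outer product is never materialized.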
{
"text": "In VQA, the input to the model is an image and a question, and the goal is to answer the question. Our model extracts representations for the image and the question, pools the vectors using MCB, and arrives at the answer by treating the problem as a multi-class classification problem with 3,000 possible classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures for VQA",
"sec_num": "3.2"
},
{
"text": "We extract image features using a 152-layer Residual Network ) that is pretrained on ImageNet data (Deng et al., 2009) . Images are resized to 448 \u00d7 448, and we use the output of the layer (\"pool5\") before the 1000-way classifier. We then perform L 2 normalization on the 2048-D vector.",
"cite_spans": [
{
"start": 99,
"end": 118,
"text": "(Deng et al., 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures for VQA",
"sec_num": "3.2"
},
{
"text": "Input questions are first tokenized into words, and the words are one-hot encoded and passed through a learned embedding layer. The tanh nonlinearity is used after the embedding. The embedding layer is followed by a 2-layer LSTM with 1024 units in each layer. The outputs of each LSTM layer are concatenated to form a 2048-D vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures for VQA",
"sec_num": "3.2"
},
{
"text": "The two vectors are then passed through MCB. The MCB is followed by an element-wise signed square-root and L 2 normalization. After MCB pooling, a fully connected layer connects the resulting 16,000-D multimodal representation to the 3,000 top answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures for VQA",
"sec_num": "3.2"
},
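The signed square root and L2 normalization after MCB can be written as a small helper (a sketch under our naming, not the authors' code):

```python
import numpy as np

def signed_sqrt_l2(z, eps=1e-12):
    # Element-wise signed square root, then L2 normalization,
    # applied to the MCB output before the final classifier.
    z = np.sign(z) * np.sqrt(np.abs(z))
    return z / (np.linalg.norm(z) + eps)
```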
{
"text": "Attention. To incorporate spatial information, we use soft attention on our MCB pooling method. Explored by for image captioning and by (Xu and Saenko, 2016) and for VQA, the soft attention mechanism can be easily integrated in our model.",
"cite_spans": [
{
"start": 136,
"end": 157,
"text": "(Xu and Saenko, 2016)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures for VQA",
"sec_num": "3.2"
},
{
"text": "For each spatial grid location in the visual representation (i.e. last convolutional layer of ResNet [res5c], last convolutional layer of VGG [conv5]), we use MCB pooling to merge the slice of the visual feature with the language representation. As depicted in Figure 3 , after the pooling we use two convolutional layers to predict the attention weight for each grid location. We apply softmax to produce a normalized soft attention map. We then take a weighted sum of the spatial vectors using the attention map to create the attended visual representation. We also experiment with generating multiple attention maps to allow the model to make multiple \"glimpses\" which are concatenated before being merged with the language representation through another MCB pooling for prediction. Predicting attention maps with MCB pooling allows the model to effectively learn how to attend to salient locations based on both the visual and language representations.",
"cite_spans": [],
"ref_spans": [
{
"start": 261,
"end": 269,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Architectures for VQA",
"sec_num": "3.2"
},
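The soft attention step, a softmax over grid locations followed by a weighted sum of the spatial vectors, can be sketched as follows (shapes and names are our assumptions):

```python
import numpy as np

def attend(features, scores):
    # features: (L, C) array of spatial grid vectors (L = H*W locations);
    # scores: (L,) unnormalized attention logits, one per grid location
    # (in the paper these come from convolutions over the MCB output).
    w = np.exp(scores - scores.max())
    w /= w.sum()              # softmax -> normalized soft attention map
    return w @ features       # weighted sum -> attended visual representation
```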
{
"text": "Answer Encoding. For VQA with multiple choices, we can additionally embed the answers. We Q : \"What do you see?\" (Ground Truth : a3) a1 : \"A courtyard with flowers\" a2 : \"A restaurant kitchen\" a3 : \"A family with a stroller, tables for dining\" a4 : \"People waiting on a train\" a1 a2 a3 a4 Figure 4 : Our architecture for VQA: MCB with Attention and Answer Encoding base our approach on the proposed MCB with attention. As can be seen from Figure 4 , to deal with multiple variable-length answer choices, each choice is encoded using a word embedding and LSTM layers whose weights are shared across the candidates. In addition to using MCB with attention, we use an additional MCB pooling to merge the encoded answer choices with the multimodal representation of the original pipeline. The resulting embedding is projected to a classification vector with a dimension equal to the number of answers.",
"cite_spans": [],
"ref_spans": [
{
"start": 289,
"end": 297,
"text": "Figure 4",
"ref_id": null
},
{
"start": 439,
"end": 447,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Architectures for VQA",
"sec_num": "3.2"
},
{
"text": "We base our grounding approach on the fullysupervised version of GroundeR (Rohrbach et al., 2016) . The overview of our model is shown in Figure 5 . The input to the model is a query natural language phrase and an image along with multiple proposal bounding boxes. The goal is to predict a bounding box which corresponds to the query phrase. We replace the concatenation of the visual representation and the encoded phrase in GroundeR with MCB to combine both modalities. In contrast to Rohrbach et al. (2016), we include a linear embedding of the visual representation and L 2 normalization of both input modalities, instead of batch normalization (Ioffe and Szegedy, 2015), which we found to be beneficial when using MCB for the grounding task.",
"cite_spans": [
{
"start": 65,
"end": 97,
"text": "GroundeR (Rohrbach et al., 2016)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 138,
"end": 146,
"text": "Figure 5",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Architecture for Visual Grounding",
"sec_num": "3.3"
},
{
"text": "We evaluate the benefit of MCB with a diverse set of ablations on two visual question answering datasets. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation on Visual Question Answering",
"sec_num": "4"
},
{
"text": "The Visual Question Answering (VQA) real-image dataset (Antol et al., 2015) consists of approximately 200,000 MSCOCO images (Lin et al., 2014) , with 3 questions per image and 10 answers per question. There are 3 data splits: train (80K images), validation (40K images), and test (80K images). Additionally, there is a 25% subset of test named test-dev. Accuracies for ablation experiments in this paper are reported on the test-dev data split. We use the VQA tool provided by Antol et al. (2015) for evaluation. We conducted most of our experiments on the openended real-image task. In Table 4 , we also report our multiple-choice real-image scores. The Visual Genome dataset (Krishna et al., 2016) uses 108,249 images from the intersection of YFCC100M (Thomee et al., 2015) and MSCOCO. For each image, an average of 17 question-answer pairs are collected. There are 1.7 million QA pairs of the 6W question types (what, where, when, who, why, and how). Compared to the VQA dataset, Visual Genome represents a more balanced distribution of the 6W question types. Moreover, the average question and answer lengths for Visual Genome are larger than the VQA dataset. To leverage the Visual Genome dataset as additional training data, we remove all the unnecessary words such as \"a\", \"the\", and \"it is\" from the answers to decrease the length of the answers and extract QA pairs whose answers are single-worded. The extracted data is filtered again based on the answer vocabulary space created from the VQA dataset, leaving us with additional 1M image-QA triplets.",
"cite_spans": [
{
"start": 55,
"end": 75,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 124,
"end": 142,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 477,
"end": 496,
"text": "Antol et al. (2015)",
"ref_id": "BIBREF2"
},
{
"start": 677,
"end": 699,
"text": "(Krishna et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 754,
"end": 775,
"text": "(Thomee et al., 2015)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 587,
"end": 594,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "The ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "We use the Adam solver with = 0.0007, \u03b2 1 = 0.9, \u03b2 2 = 0.999. We use dropout after the LSTM layers and in fully connected layers. For the experiments in Table 1 and 2, we train on the VQA train split, validate on the VQA validation split, and report results on the VQA test-dev split. We use early stopping: if the validation score does not improve for 50,000 iterations, we stop training and evaluate the best iteration on test-dev. For the Visual7W task, we use the same hyperparameters and training settings as in the VQA experiments. We use the splits from (Zhu et al., 2016) to train, validate, and test our models. We also compute accuracies on this data using their evaluation code.",
"cite_spans": [
{
"start": 561,
"end": 579,
"text": "(Zhu et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 153,
"end": 160,
"text": "Table 1",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.2"
},
{
"text": "For VQA multiple choice, we train the open-ended models and take the argmax over the multiple choice answers at test time. For Visual7W, we use the answer encoding as described in Sec. 3.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.2"
},
{
"text": "We compare the performance of non-bilinear and bilinear pooling methods in Table 1 . We see that MCB pooling outperforms all non-bilinear pooling methods, such as eltwise sum, concatenation, and eltwise product. One could argue that the compact bilinear method simply has more parameters than the non-bilinear pooling methods, which contributes to its performance. We compensated for this by stacking fully connected layers (with 4096 units per layer, ReLU activation, and dropout) after the non-bilinear pooling methods to increase their number of parameters. However, even with similar parameter budgets, nonbilinear methods could not achieve the same accuracy as the MCB method. For example, the \"Concatenation + FC + FC\" pooling method has approximately 4096 2 + 4096 2 + 4096 \u00d7 3000 \u2248 46 million parameters, which matches the 48 million parameters available in MCB with d = 16000. However, the performance of the \"Concatenation + FC + FC\" method is only 57.10% compared to MCB's 59.83%.",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 82,
"text": "Table 1",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Ablation Results",
"sec_num": "4.3"
},
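The parameter budget comparison can be checked with back-of-the-envelope arithmetic (weight matrices only, biases ignored; our sketch, not the authors' code):

```python
# Concatenating two 2048-D vectors gives a 4096-D input; two 4096-unit
# FC layers plus the 3000-way classifier:
concat_fc_fc = 4096 ** 2 + 4096 ** 2 + 4096 * 3000   # ~45.8M, i.e. ~46 million
# MCB with d = 16000 feeds a single 16000 -> 3000 classifier:
mcb_classifier = 16000 * 3000                        # 48 million
```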
{
"text": "Section 2 in (Wu et al., 2016) 81.0 38.4 45.2 59.2 -81.1 37.1 45.8 59.4 -SAN 79.3 36.6 46.1 58.7 ----58.9 - NMN (Andreas et al., 2016b) 81.2 38.0 44.0 58.6 -81.2 37.7 44.0 58.7 -AYN (Malinowski et al., 2016) 78.4 36.4 46.3 58.4 -78.2 36.3 46.3 58.4 -SMem (Xu and Saenko, 2016) 80.9 37.3 43.1 58.0 -80.9 37.5 43.5 58.2 -VQA team (Antol et al., 2015) 80.5 36.8 43.1 57.8 62.7 80.6 36.5 43.7 58.2 63.1 DPPnet (Noh et al., 2015) 80.7 37.2 41.7 57.2 -80.3 36.9 42.2 57.4 -iBOWIMG (Zhou et al., 2015) 76.5 35.0 42.6 55.7 -76.8 35.0 42.6 55.9 62.0 Table 4 : Open-ended and multiple-choice (MC) results on VQA test set (trained on train+val set) compared with state-of-the-art: accuracy in %. See Sec. 4.4.",
"cite_spans": [
{
"start": 13,
"end": 30,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF39"
},
{
"start": 108,
"end": 135,
"text": "NMN (Andreas et al., 2016b)",
"ref_id": null
},
{
"start": 182,
"end": 207,
"text": "(Malinowski et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 255,
"end": 276,
"text": "(Xu and Saenko, 2016)",
"ref_id": "BIBREF40"
},
{
"start": 328,
"end": 348,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 406,
"end": 424,
"text": "(Noh et al., 2015)",
"ref_id": "BIBREF28"
},
{
"start": 475,
"end": 494,
"text": "(Zhou et al., 2015)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 541,
"end": 548,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Results",
"sec_num": "4.3"
},
{
"text": "linear pooling has no impact on accuracy compared to full bilinear pooling. Section 3 in Table 1 demonstrates that the MCB brings improvements regardless of the image CNN used. We primarily use ResNet-152 in this paper, but MCB also improves performance if VGG-19 is used. Section 4 in Table 1 shows that our soft attention model works best with MCB pooling. In fact, attending to the Concatenation + FC layer has the same performance as not using attention at all, while attending to the MCB layer improves performance by 2.67 points. Table 2 compares different values of d, the output dimensionality of the multimodal compact bilinear feature. Approximating the bilinear feature with a 16,000-D vector yields the highest accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 89,
"end": 96,
"text": "Table 1",
"ref_id": "TABREF8"
},
{
"start": 286,
"end": 293,
"text": "Table 1",
"ref_id": "TABREF8"
},
{
"start": 536,
"end": 543,
"text": "Table 2",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Ablation Results",
"sec_num": "4.3"
},
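The d-dimensional approximation above can be illustrated with a minimal Count Sketch implementation of compact bilinear pooling, following Gao et al. (2016) and Pham and Pagh (2013): the Count Sketch of the outer product of two vectors equals the circular convolution of their individual sketches, which can be computed in the frequency domain. This is a sketch of that construction, not the authors' released code; the function names and the NumPy realization are assumptions.

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Project x into d dimensions: y[h[i]] += s[i] * x[i]."""
    y = np.zeros(d)
    np.add.at(y, h, s * x)  # unbuffered scatter-add handles repeated hash indices
    return y

def mcb(v, q, d=16000, seed=0):
    """Approximate the outer product of v and q with a d-dimensional vector.

    The Count Sketch of the outer product equals the circular convolution
    of the two individual sketches, computed here via FFT.
    """
    rng = np.random.default_rng(seed)
    # Fixed random hash and sign functions, one pair per modality.
    h_v = rng.integers(0, d, size=v.size)
    s_v = rng.choice([-1.0, 1.0], size=v.size)
    h_q = rng.integers(0, d, size=q.size)
    s_q = rng.choice([-1.0, 1.0], size=q.size)
    sketch_v = count_sketch(v, h_v, s_v, d)
    sketch_q = count_sketch(q, h_q, s_q, d)
    return np.real(np.fft.ifft(np.fft.fft(sketch_v) * np.fft.fft(sketch_q)))
```

In the paper's architecture the d-dimensional output is additionally passed through signed square root and L2 normalization before the classifier.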
{
"text": "We also evaluated models with multiple attention maps or channels. One attenion map achieves 64.67%, two 65.08% and four 64.24% accuracy (trained on train+val). Visual inspection of the gen-erated attention maps reveals that an ensembling or smoothing effect occurs when using multiple maps. Table 3 presents results for the Visual7W multiplechoice QA task. The MCB with attention model outperforms the previous state-of-the-art by 7.9 points overall and performs better in almost every category. Table 4 compares our approach with the state-of-theart on VQA test set. Our best single model uses MCB pooling with two attention maps. Additionally, we augment our training data with images and QA pairs from the Visual Genome dataset. We also concatenate the learned word embedding with pretrained GloVe vectors (Pennington et al., 2014) .",
"cite_spans": [
{
"start": 810,
"end": 835,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 292,
"end": 299,
"text": "Table 3",
"ref_id": "TABREF7"
},
{
"start": 497,
"end": 504,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Results",
"sec_num": "4.3"
},
{
"text": "Each model in our ensemble of 7 models uses MCB with attention. Some of the models were trained with data from Visual Genome, and some were trained with two attention maps. This ensem-Method Accuracy, % Plummer et al. (2015) 27.42 Hu et al. (2016b) 27.80 Plummer et al. 2016 ble is 1.8 points above the next best approach on the VQA open-ended task and 0.8 points above the next best approach on the multiple-choice task (on Testdev). Even without ensembles, our \"MCB + Genome + Att. + GloVe\" model still outperforms the next best result by 0.5 points, with an accuracy of 65.4% versus 64.9% on the open-ended task (on Test-dev).",
"cite_spans": [
{
"start": 203,
"end": 224,
"text": "Plummer et al. (2015)",
"ref_id": null
},
{
"start": 231,
"end": 248,
"text": "Hu et al. (2016b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to State-of-the-Art",
"sec_num": "4.4"
},
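The ensemble numbers above come from combining the answer scores of the 7 trained models. The paper does not spell out its exact combination rule, so the averaging of per-model softmax distributions below is an assumed, standard choice, and the function name is illustrative:

```python
import numpy as np

def ensemble_predict(model_probs):
    """Combine per-model softmax distributions over the answer vocabulary
    by averaging them, then pick the highest-scoring answer index."""
    avg = np.mean(np.stack(model_probs), axis=0)
    return int(np.argmax(avg))
```

For example, with two models scoring a two-answer vocabulary as [0.6, 0.4] and [0.2, 0.8], the averaged distribution is [0.4, 0.6] and the ensemble selects answer index 1.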
{
"text": "5 Evaluation on Visual Grounding",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to State-of-the-Art",
"sec_num": "4.4"
},
{
"text": "We evaluate our visual grounding approach on two datasets. The first is Flickr30k Entities (Plummer et al., 2015) which consists of 31K images from Flickr30k dataset with 244K phrases localized with bounding boxes. We follow the experimental setup of Rohrbach et al. (2016), e.g. we use the same Selective Search (Uijlings et al., 2013) object proposals and the Fast R-CNN (Girshick, 2015) fine-tuned VGG16 features (Simonyan and Zisserman, 2014) . The second dataset is Refer-ItGame (Kazemzadeh et al., 2014) , which contains 20K images from IAPR TC-12 dataset (Grubinger et al., 2006) with segmented regions from SAIAPR-12 dataset (Escalante et al., 2010) and 120K associated natural language referring expressions. For Refer-ItGame we follow the experimental setup of Hu et al. (2016b) and rely on their ground-truth bounding boxes extracted around the segmentation masks. We use the Edge Box object proposals and visual features (VGG16 combined with the spatial features, which encode bounding box relative position) from Hu et al. (2016b) .",
"cite_spans": [
{
"start": 91,
"end": 113,
"text": "(Plummer et al., 2015)",
"ref_id": null
},
{
"start": 416,
"end": 446,
"text": "(Simonyan and Zisserman, 2014)",
"ref_id": "BIBREF32"
},
{
"start": 484,
"end": 509,
"text": "(Kazemzadeh et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 562,
"end": 586,
"text": "(Grubinger et al., 2006)",
"ref_id": "BIBREF10"
},
{
"start": 633,
"end": 657,
"text": "(Escalante et al., 2010)",
"ref_id": "BIBREF6"
},
{
"start": 771,
"end": 788,
"text": "Hu et al. (2016b)",
"ref_id": "BIBREF14"
},
{
"start": 1026,
"end": 1043,
"text": "Hu et al. (2016b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "In all experiments we use Adam solver (Kingma and Ba, 2014) with learning rate = 0.0001. The embedding size is 500 both for visual and language embeddings. We use d = 2048 in the MCB pooling, which we found to work best for the visual grounding task. The accuracy is measured as percentage of query phrases which have been localized correctly. The phrase is localized correctly if the predicted bounding box overlaps with the ground-truth bounding box by more than 50% intersection over union (IOU).",
"cite_spans": [
{
"start": 38,
"end": 59,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.2"
},
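The accuracy metric just defined is straightforward to compute. The sketch below assumes boxes given as (x1, y1, x2, y2) corner coordinates; the function names are illustrative, not taken from the authors' implementation:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)  # assumes non-degenerate boxes

def grounding_accuracy(predicted, ground_truth, threshold=0.5):
    """Percentage of query phrases whose predicted box exceeds the IoU threshold."""
    hits = sum(iou(p, g) > threshold for p, g in zip(predicted, ground_truth))
    return 100.0 * hits / len(predicted)
```

A strict "greater than" comparison matches the "more than 50% IoU" criterion stated above.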
{
"text": "Tables 5 and 6 summarize our results in the visual grounding task. We present multiple ablations of our proposed architecture. First, we replace the MCB with simple concatenation of the embedded visual feature and the embedded phrase, resulting in 46.5% on the Flickr30k Entities and 25.48% on the Refer-ItGame datasets. The results can be improved by replacing the concatenation with the element-wise product of both embedded features (47.41% and 27.80%). We can further slightly increase the performance by introducing additional 2048-D convolution after the element-wise product (47.86% and 27.98%). However, even with fewer parameters, our MCB pooling significantly improves over this baseline on both datasets, reaching state-of-the-art accuracy of 48.69% on Flickr30k Entities and 28.91% on ReferItGame dataset. Figure 6 (bottom) shows examples of improved phrase localization.",
"cite_spans": [],
"ref_spans": [
{
"start": 818,
"end": 826,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.3"
},
{
"text": "Plummer et al. (2016) achieve higher accuracy of 50.89% when taking into account box size and color. We believe our approach would also benefit from such additional features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Yang Gao and Oscar Beijbom for helpful discussions about Compact Bilinear Pooling. This work was supported by DARPA, AFRL, DoD MURI award N000141110688, NSF awards IIS-1427425 and IIS-1212798, and the Berkeley Artificial Intelligence Research (BAIR) Lab.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "We propose the Multimodal Compact Bilinear Pooling (MCB) to combine visual and text representations. For visual question answering, our architecture with attention and multiple MCBs gives significant improvements on two VQA datasets compared to state-of-the-art. In the visual grounding task, introducing MCB pooling leads to improved phrase localization accuracy, indicating better interaction between query phrase representations and visual rep-resentations of proposal bounding boxes. The code to replicate our experiments is available at https: //github.com/akirafukui/vqa-mcb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning to compose neural networks for question answering",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References [Andreas et al.2016a] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016a. Learning to compose neural networks for question answering. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguis- tics (NAACL).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural module networks",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas et al.2016b] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016b. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Vqa: Visual question answering",
"authors": [
{
"first": "",
"middle": [],
"last": "Antol",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antol et al.2015] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Finding frequent items in data streams",
"authors": [
{
"first": "[",
"middle": [],
"last": "Charikar",
"suffix": ""
}
],
"year": 2002,
"venue": "Automata, languages and programming",
"volume": "",
"issue": "",
"pages": "693--703",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Charikar et al.2002] Moses Charikar, Kevin Chen, and Martin Farach-Colton. 2002. Finding frequent items in data streams. In Automata, languages and program- ming, pages 693-703. Springer.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "ImageNet: A Large-Scale Hierarchical Image Database",
"authors": [
{
"first": "",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deng et al.2009] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. 2009. ImageNet: A Large- Scale Hierarchical Image Database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Decaf: A deep convolutional activation feature for generic visual recognition",
"authors": [
{
"first": "[",
"middle": [],
"last": "Donahue",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Donahue et al.2013] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. 2013. Decaf: A deep convolutional activation feature for generic visual recognition. In Proceedings of the International Conference on Ma- chine Learning (ICML).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The segmented and annotated iapr tc-12 benchmark. Computer Vision and Image Understanding",
"authors": [
{
"first": "",
"middle": [],
"last": "Escalante",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in Neural Information Processing Systems (NIPS)",
"volume": "114",
"issue": "",
"pages": "419--428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Escalante et al.2010] Hugo Jair Escalante, Carlos A Hern\u00e1ndez, Jesus A Gonzalez, Aurelio L\u00f3pez-L\u00f3pez, Manuel Montes, Eduardo F Morales, L Enrique Sucar, Luis Villase\u00f1or, and Michael Grubinger. 2010. The segmented and annotated iapr tc-12 benchmark. Com- puter Vision and Image Understanding, 114(4):419- 428. [Frome et al.2013] Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. 2013. Devise: A deep visual-semantic embedding model. In Advances in Neural Information Process- ing Systems (NIPS).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Compact bilinear pooling",
"authors": [],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al.2016] Yang Gao, Oscar Beijbom, Ning Zhang, and Trevor Darrell. 2016. Compact bilinear pooling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Fast R-CNN",
"authors": [
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ross Girshick. 2015. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improving image-sentence embeddings using large weakly annotated photo collections",
"authors": [],
"year": 2014,
"venue": "Proceedings of the European Conference on Computer Vision (ECCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al.2014] Yunchao Gong, Liwei Wang, Micah Ho- dosh, Julia Hockenmaier, and Svetlana Lazebnik. 2014. Improving image-sentence embeddings using large weakly annotated photo collections. In Proceedings of the European Conference on Computer Vision (ECCV).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The iapr tc-12 benchmark: A new evaluation resource for visual information systems",
"authors": [
{
"first": "",
"middle": [],
"last": "Grubinger",
"suffix": ""
}
],
"year": 2004,
"venue": "International Workshop OntoImage",
"volume": "5",
"issue": "",
"pages": "2639--2664",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grubinger et al.2006] Michael Grubinger, Paul Clough, Henning M\u00fcller, and Thomas Deselaers. 2006. The iapr tc-12 benchmark: A new evaluation resource for visual information systems. In International Workshop OntoImage, volume 5, page 10. [Hardoon et al.2004] David R Hardoon, Sandor Szedmak, and John Shawe-Taylor. 2004. Canonical correlation analysis: An overview with application to learning methods. Neural computation, 16(12):2639-2664.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep residual learning for image recognition",
"authors": [],
"year": 2015,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al.2015] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. In Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions",
"authors": [
{
"first": "",
"middle": [],
"last": "Hodosh",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics (TACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Hodosh et al.2014] Peter Hodosh, Alice Young, Micah Lai, and Julia Hockenmaier. 2014. From image de- scriptions to visual denotations: New similarity met- rics for semantic inference over event descriptions. In Transactions of the Association for Computational Lin- guistics (TACL).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Segmentation from natural language expressions",
"authors": [],
"year": 2016,
"venue": "Proceedings of the European Conference on Computer Vision (ECCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al.2016a] Ronghang Hu, Marcus Rohrbach, and Trevor Darrell. 2016a. Segmentation from natural language expressions. In Proceedings of the European Conference on Computer Vision (ECCV).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Natural language object retrieval",
"authors": [],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al.2016b] Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Dar- rell. 2016b. Natural language object retrieval. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift",
"authors": [
{
"first": "Ilija",
"middle": [],
"last": "Ilievski",
"suffix": ""
},
{
"first": "Shuicheng",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Jiashi",
"middle": [],
"last": "Feng",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1604.01485"
]
},
"num": null,
"urls": [],
"raw_text": "[Ilievski et al.2016] Ilija Ilievski, Shuicheng Yan, and Ji- ashi Feng. 2016. A focused dynamic attention model for visual question answering. arXiv:1604.01485. [Ioffe and Szegedy2015] Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Deep visual-semantic alignments for generating image descriptions",
"authors": [],
"year": 2015,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Karpathy and Fei-Fei2015] Andrej Karpathy and Li Fei- Fei. 2015. Deep visual-semantic alignments for gener- ating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni- tion (CVPR).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Referit game: Referring to objects in photographs of natural scenes",
"authors": [],
"year": 2014,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al.2014] Sahar Kazemzadeh, Vicente Or- donez, Mark Matten, and Tamara L. Berg. 2014. Referit game: Referring to objects in photographs of natural scenes. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Ba2014] Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Ba2014] Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learn- ing Representations (ICLR).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multimodal neural language models",
"authors": [
{
"first": "[",
"middle": [],
"last": "Kiros",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "595--603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Kiros et al.2014] Ryan Kiros, Ruslan Salakhutdinov, and Rich Zemel. 2014. Multimodal neural language mod- els. In Proceedings of the International Conference on Machine Learning (ICML), pages 595-603. [Kiros et al.2015] Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems (NIPS).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Fisher vectors derived from hybrid gaussian-laplacian mixture models for image annotation",
"authors": [
{
"first": "",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klein et al.2015] Benjamin Klein, Guy Lev, Gil Sadeh, and Lior Wolf. 2015. Fisher vectors derived from hybrid gaussian-laplacian mixture models for image annotation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations",
"authors": [
{
"first": "[",
"middle": [],
"last": "Krishna",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.07332"
]
},
"num": null,
"urls": [],
"raw_text": "[Krishna et al.2016] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. 2016. Visual genome: Connecting language and vi- sion using crowdsourced dense image annotations. arXiv:1602.07332.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Ask me anything: Dynamic memory networks for natural language processing",
"authors": [
{
"first": "",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar et al.2016] Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natu- ral language processing. In Proceedings of the Interna- tional Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Microsoft coco: Common objects in context",
"authors": [],
"year": 2014,
"venue": "Proceedings of the European Conference on Computer Vision (ECCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al.2014] Tsung-Yi Lin, Michael Maire, Serge Be- longie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Proceedings of the European Conference on Computer Vision (ECCV).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bilinear cnn models for finegrained visual recognition",
"authors": [
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Lin et al.2015] Tsung-Yu Lin, Aruni RoyChowdhury, and Subhransu Maji. 2015. Bilinear cnn models for fine- grained visual recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Hierarchical Co-Attention for Visual Question Answering",
"authors": [
{
"first": "[",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Lu et al.2016] Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical Co-Attention for Vi- sual Question Answering. In Advances in Neural Infor- mation Processing Systems (NIPS).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Deep captioning with multimodal recurrent neural networks (m-rnn)",
"authors": [
{
"first": "[",
"middle": [],
"last": "Malinowski",
"suffix": ""
}
],
"year": 2015,
"venue": "Ask Your Neurons: A Deep Learning Approach to Visual Question Answering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.02697"
]
},
"num": null,
"urls": [],
"raw_text": "[Malinowski et al.2016] Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. 2016. Ask Your Neu- rons: A Deep Learning Approach to Visual Question Answering. arXiv: 1605.02697. [Mao et al.2015] Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan Yuille. 2015. Deep captioning with multimodal recurrent neural networks (m-rnn). In Proceedings of the International Confer- ence on Learning Representations (ICLR).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Multimodal deep learning",
"authors": [
{
"first": "",
"middle": [],
"last": "Ngiam",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "689--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Ngiam et al.2011] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. 2011. Multimodal deep learning. In Proceedings of the International Conference on Machine Learning (ICML), pages 689-696.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Image question answering using convolutional neural network with dynamic parameter prediction",
"authors": [
{
"first": "",
"middle": [],
"last": "Noh",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Noh et al.2015] Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han. 2015. Image question answering using convolutional neural network with dynamic parameter prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "",
"middle": [],
"last": "Pennington",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Pennington et al.2014] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Flickr30k entities: Collecting region-to-phrase correspondences for richer image-tosentence models",
"authors": [
{
"first": "Rasmus",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pagh",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '13",
"volume": "",
"issue": "",
"pages": "239--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Pham and Pagh2013] Ninh Pham and Rasmus Pagh. 2013. Fast and scalable polynomial kernels via ex- plicit feature maps. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Dis- covery and Data Mining, KDD '13, pages 239-247, New York, NY, USA. ACM. [Plummer et al.2015] Bryan Plummer, Liwei Wang, Chris Cervantes, Juan Caicedo, Julia Hockenmaier, and Svet- lana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to- sentence models. In Proceedings of the IEEE Interna- tional Conference on Computer Vision (ICCV).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Flickr30k entities: Collecting region-to-phrase correspondences for richer image-tosentence models",
"authors": [
{
"first": "[",
"middle": [],
"last": "Plummer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the European Conference on Computer Vision (ECCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1505.04870v3"
]
},
"num": null,
"urls": [],
"raw_text": "[Plummer et al.2016] Bryan Plummer, Liwei Wang, Chris Cervantes, Juan Caicedo, Julia Hockenmaier, and Svet- lana Lazebnik. 2016. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to- sentence models. arXiv:1505.04870v3. [Rohrbach et al.2016] Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, and Bernt Schiele. 2016. Grounding of textual phrases in images by reconstruc- tion. In Proceedings of the European Conference on Computer Vision (ECCV).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Very deep convolutional networks for large-scale image recognition",
"authors": [
{
"first": "Zisserman2014] Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Zisserman2014] Karen Simonyan and An- drew Zisserman. 2014. Very deep convolutional net- works for large-scale image recognition. In Proceed- ings of the International Conference on Learning Rep- resentations (ICLR).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Grounded compositional semantics for finding and describing images with sentences",
"authors": [
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "207--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Socher et al.2014] Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics, 2:207- 218.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Informa- tion Processing Systems (NIPS).",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Separating style and content with bilinear models",
"authors": [
{
"first": "Freeman2000] Joshua B",
"middle": [],
"last": "Tenenbaum",
"suffix": ""
},
{
"first": "William T",
"middle": [],
"last": "Freeman",
"suffix": ""
}
],
"year": 2000,
"venue": "Neural computation",
"volume": "12",
"issue": "6",
"pages": "1247--1283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Freeman2000] Joshua B Tenenbaum and William T Freeman. 2000. Separating style and content with bilinear models. Neural computation, 12(6):1247- 1283.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Selective search for object recognition",
"authors": [
{
"first": "[",
"middle": [],
"last": "Thomee",
"suffix": ""
}
],
"year": 2013,
"venue": "The new data and new challenges in multimedia research. CoRR",
"volume": "104",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Thomee et al.2015] Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Dou- glas Poland, Damian Borth, and Li-Jia Li. 2015. The new data and new challenges in multimedia research. CoRR, abs/1503.01817. [Uijlings et al.2013] Jasper RR Uijlings, Koen EA van de Sande, Theo Gevers, and Arnold WM Smeulders. 2013. Selective search for object recognition. International Journal of Computer Vision (IJCV), 104(2).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Learning deep structure-preserving image-text embeddings",
"authors": [],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al.2016] Liwei Wang, Yin Li, and Svetlana Lazebnik. 2016. Learning deep structure-preserving image-text embeddings. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni- tion (CVPR).",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Wsabie: Scaling up to large vocabulary image annotation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Weston et al.2011] Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of the In- ternational Joint Conference on Artificial Intelligence (IJCAI).",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Ask Me Anything: Free-form Visual Question Answering Based on Knowledge from External Sources",
"authors": [],
"year": 2016,
"venue": "Proceedings of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al.2016] Qi Wu, Peng Wang, Chunhua Shen, An- ton van den Hengel, and Anthony Dick. 2016. Ask Me Anything: Free-form Visual Question Answering Based on Knowledge from External Sources. In Proc. IEEE Conf. Computer Vision Pattern Recognition. [Xiong et al.2016] Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In Proceedings of the International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Ask, attend and answer: Exploring question-guided spatial attention for visual question answering",
"authors": [
{
"first": "Saenko2016] Huijuan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the European Conference on Computer Vision (ECCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Saenko2016] Huijuan Xu and Kate Saenko. 2016. Ask, attend and answer: Exploring question-guided spa- tial attention for visual question answering. In Proceed- ings of the European Conference on Computer Vision (ECCV).",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [],
"year": 2015,
"venue": "Proceedings of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al.2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. Pro- ceedings of the International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Visual7W: Grounded Question Answering in Images",
"authors": [],
"year": 2015,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.02274"
]
},
"num": null,
"urls": [],
"raw_text": "et al.2015] Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2015. Stacked attention networks for image question answering. arXiv:1511.02274. [Zhou et al.2015] Bolei Zhou, Yuandong Tian, Sainba- yar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. Simple baseline for visual question answering. arXiv:1512.02167. [Zhu et al.2016] Yuke Zhu, Oliver Groth, Michael Bern- stein, and Li Fei-Fei. 2016. Visual7W: Grounded Question Answering in Images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Edge boxes: Locating object proposals from edges",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the European Conference on Computer Vision (ECCV)",
"volume": "",
"issue": "",
"pages": "391--405",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Zitnick and Doll\u00e1r2014] C Lawrence Zitnick and Piotr Doll\u00e1r. 2014. Edge boxes: Locating object propos- als from edges. In Proceedings of the European Con- ference on Computer Vision (ECCV), pages 391-405. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Multimodal Compact Bilinear Pooling for visual question answering.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Our Architecture for Grounding withMCB (Sec. 3.3)",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Our architecture for VQA: Multimodal Compact Bilinear (MCB) with Attention. Conv implies convolutional layers and FC implies fully connected layers. For details see Sec. 3.2.",
"type_str": "table",
"content": "<table><tr><td>the giraffe? woman feeding What is the</td><td>WE, LSTM (ResNet152)</td><td>Tile CNN</td><td>2048x14x14 2048x14x14</td><td>Multimodal Compact Bilinear</td><td>16k x14x14</td><td>Conv, Relu</td><td>512 x 14 x 14</td><td>Conv</td><td>1 x 14 x 14</td><td>Softmax</td><td>Weighted Sum</td><td>2048</td></tr><tr><td/><td/><td/><td/><td/><td>2048 2048</td><td colspan=\"4\">Multimodal Compact Bilinear</td><td/><td>16k</td><td>FC</td><td>3000</td><td>Softmax</td><td>\"Carrot\"</td></tr><tr><td>Figure 3:</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"html": null
},
"TABREF4": {
"text": "Visual7W dataset (Zhu et al., 2016) is a part of the Visual Genome. Visual7W adds a 7th which question category to accommodate visual answers,",
"type_str": "table",
"content": "<table><tr><td>Method</td><td>Accuracy</td></tr><tr><td>Element-wise Sum</td><td>56.50</td></tr><tr><td>Concatenation</td><td>57.49</td></tr><tr><td>Concatenation + FC</td><td>58.40</td></tr><tr><td>Concatenation + FC + FC</td><td>57.10</td></tr><tr><td>Element-wise Product</td><td>58.57</td></tr><tr><td>Element-wise Product + FC</td><td>56.44</td></tr><tr><td>Element-wise Product + FC + FC</td><td>57.88</td></tr><tr><td>MCB (2048 \u00d7 2048 \u2192 16K) Full Bilinear (128 \u00d7 128 \u2192 16K) MCB (128 \u00d7 128 \u2192 4K) Element-wise Product with VGG-19</td><td>59.83 58.46 58.69 55.97</td></tr><tr><td>MCB (d = 16K) with VGG-19</td><td>57.05</td></tr><tr><td>Concatenation + FC with Attention</td><td>58.36</td></tr><tr><td>MCB (d = 16K) with Attention</td><td>62.50</td></tr><tr><td colspan=\"2\">Table 1: Comparison of multimodal pooling methods.</td></tr><tr><td colspan=\"2\">Models are trained on the VQA train split and tested</td></tr><tr><td>on test-dev.</td><td/></tr><tr><td colspan=\"2\">but we only evaluate the models on the Telling task</td></tr><tr><td colspan=\"2\">which involves 6W questions. The natural language</td></tr><tr><td colspan=\"2\">answers in Visual7W are in a multiple-choice format</td></tr><tr><td colspan=\"2\">and each question comes with four answer candidates,</td></tr><tr><td colspan=\"2\">with only one being the correct answer. Visual7W</td></tr><tr><td colspan=\"2\">is composed of 47,300 images from MSCOCO and</td></tr><tr><td>there are a total of 139,868 QA pairs.</td><td/></tr></table>",
"num": null,
"html": null
},
"TABREF6": {
"text": "Accuracies for different values of d, the dimension of the compact bilinear feature. Models are trained on the VQA train split and tested on testdev. Details in Sec. 4.3.",
"type_str": "table",
"content": "<table><tr><td>Method</td><td>What Where When Who Why How Avg</td></tr><tr><td colspan=\"2\">Zhu et al. Concat+Att. 47.8 56.9 74.1 62.3 52.7 51.2 52.8 51.5 57.0 75.0 59.5 55.5 49.8 54.3 MCB+Att. 60.3 70.4 79.5 69.2 58.2 51.1 62.2</td></tr></table>",
"num": null,
"html": null
},
"TABREF7": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Multiple-choice QA tasks accuracy (%) on</td></tr><tr><td>Visual7W test set.</td></tr></table>",
"num": null,
"html": null
},
"TABREF8": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>also shows that compact bi-</td></tr></table>",
"num": null,
"html": null
},
"TABREF10": {
"text": "Grounding accuracy on Flickr30k Entities dataset.",
"type_str": "table",
"content": "<table><tr><td>Method</td><td>Accuracy, %</td></tr><tr><td>Hu et al. (2016b)</td><td>17.93</td></tr><tr><td>Rohrbach et al. (2016)</td><td>26.93</td></tr><tr><td>Concatenation</td><td>25.48</td></tr><tr><td>Element-wise Product</td><td>27.80</td></tr><tr><td>Element-wise Product + Conv</td><td>27.98</td></tr><tr><td>MCB</td><td>28.91</td></tr></table>",
"num": null,
"html": null
},
"TABREF11": {
"text": "Grounding accuracy on ReferItGame dataset.",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}