| { |
| "paper_id": "K18-1039", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:10:09.982357Z" |
| }, |
| "title": "Lessons learned in multilingual grounded language learning", |
| "authors": [ |
| { |
| "first": "\u00c1kos", |
| "middle": [], |
| "last": "K\u00e1d\u00e1r", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Desmond", |
| "middle": [], |
| "last": "Elliott", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Marc-Alexandre", |
| "middle": [], |
| "last": "C\u00f4t\u00e9", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Grzegorz", |
| "middle": [], |
| "last": "Chrupa\u0142a", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "g.chrupala@uvt.nl" |
| }, |
| { |
| "first": "Afra", |
| "middle": [], |
| "last": "Alishahi", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "a.alishahi@uvt.nl" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Recent work has shown how to learn better visual-semantic embeddings by leveraging image descriptions in more than one language. Here, we investigate in detail which conditions affect the performance of this type of grounded language learning model. We show that multilingual training improves over bilingual training, and that low-resource languages benefit from training with higher-resource languages. We demonstrate that a multilingual model can be trained equally well on either translations or comparable sentence pairs, and that annotating the same set of images in multiple languages enables further improvements via an additional caption-caption ranking objective.", |
| "pdf_parse": { |
| "paper_id": "K18-1039", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Recent work has shown how to learn better visual-semantic embeddings by leveraging image descriptions in more than one language. Here, we investigate in detail which conditions affect the performance of this type of grounded language learning model. We show that multilingual training improves over bilingual training, and that low-resource languages benefit from training with higher-resource languages. We demonstrate that a multilingual model can be trained equally well on either translations or comparable sentence pairs, and that annotating the same set of images in multiple languages enables further improvements via an additional caption-caption ranking objective.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Multimodal representation learning is largely motivated by evidence of perceptual grounding in human concept acquisition and representation (Barsalou et al., 2003) . It has been shown that visually grounded word and sentence-representations Baroni, 2016; Elliott and K\u00e1d\u00e1r, 2017; Kiela et al., 2017; Yoo et al., 2017) improve performance on the downstream tasks of paraphrase identification, semantic entailment, and multimodal machine translation (Dolan et al., 2004; Marelli et al., 2014; Specia et al., 2016) . Multilingual sentence representations have also been successfully applied to many-languages-to-one character-level machine translation (Chung et al., 2016) and multilingual dependency parsing (Ammar et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 163, |
| "text": "(Barsalou et al., 2003)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 241, |
| "end": 254, |
| "text": "Baroni, 2016;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 255, |
| "end": 279, |
| "text": "Elliott and K\u00e1d\u00e1r, 2017;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 280, |
| "end": 299, |
| "text": "Kiela et al., 2017;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 300, |
| "end": 317, |
| "text": "Yoo et al., 2017)", |
| "ref_id": "BIBREF53" |
| }, |
| { |
| "start": 448, |
| "end": 468, |
| "text": "(Dolan et al., 2004;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 469, |
| "end": 490, |
| "text": "Marelli et al., 2014;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 491, |
| "end": 511, |
| "text": "Specia et al., 2016)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 649, |
| "end": 669, |
| "text": "(Chung et al., 2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 706, |
| "end": 726, |
| "text": "(Ammar et al., 2016)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Recently, Gella et al. (2017) proposed to learn both bilingual and multimodal sentence representations using images paired with captions independently collected in English and German. Their results show that bilingual training improves image-sentence ranking performance over a monolingual baseline, and it improves performance on semantic textual similarity benchmarks (Agirre et al., 2014, 2015). These findings suggest that it may be beneficial to consider another language as another modality in a monolingual grounded language learning model. In the grounded learning scenario, descriptions of an image in multiple languages can be considered as multiple views of the same or closely related data. These additional views can help overcome the problems of data sparsity, and have practical implications for efficiently collecting image-text datasets in different languages. In real-life applications, many tasks and domains can involve code switching (Barman et al., 2014), which is easier to handle with a multilingual model. Furthermore, it is more convenient to maintain a single multilingual system than one system for each considered language. However, there is a need for a systematic exploration of the conditions under which it is useful to add additional views of the data. We investigate the impact of the following conditions on the performance of a multilingual grounded language learning model in sentence and image retrieval tasks: (* Work carried out at the University of Edinburgh.)", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 29, |
| "text": "Gella et al. (2017)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 420, |
| "end": 440, |
| "text": "(Agirre et al., 2014", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 441, |
| "end": 463, |
| "text": "(Agirre et al., , 2015", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1021, |
| "end": 1042, |
| "text": "(Barman et al., 2014)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Additional languages. Multilingual models have not yet been explored in a multimodal setting. We investigate the contribution of adding more than one language by performing bilingual experiments on English and German (Section 5) as well as adding French and Czech captioned images (Section 6).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We assess the performance of multilingual models trained using either captions that are translations of each other, or captions that are independently collected in different languages for the same set of images. The two scenarios are illustrated in Figure 1. In the disjoint setting, captions are collected in different languages for different sets of images. Such disjoint settings have been explored in pivot-based multimodal representation learning (Funaki and Nakayama, 2015; Rajendran et al., 2015) or zero-shot multimodal machine translation (Nakayama and Nishida, 2017). We compare translated vs. independently collected captions in Sections 5.2 and 6.1, and overlapping vs. disjoint images in Section 5.3.", |
| "cite_spans": [ |
| { |
| "start": 390, |
| "end": 417, |
| "text": "(Funaki and Nakayama, 2015;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 418, |
| "end": 441, |
| "text": "Rajendran et al., 2015)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 487, |
| "end": 515, |
| "text": "(Nakayama and Nishida, 2017)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data alignment:", |
| "sec_num": null |
| }, |
| { |
| "text": "High-to-low resource transfer: In Section 6.2 we investigate whether low-resource languages benefit from jointly training on larger data sets from higher-resource languages. This type of transfer has previously been shown to be effective in machine translation (e.g., Zoph et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 268, |
| "end": 286, |
| "text": "Zoph et al., 2016)", |
| "ref_id": "BIBREF54" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data alignment:", |
| "sec_num": null |
| }, |
| { |
| "text": "Training objective: In addition to learning to map images to sentences, we study the effect of also learning relationships between captions of the same image in different languages, following Gella et al. (2017). We assess the contribution of such a caption-caption ranking objective throughout our experiments.", |
| "cite_spans": [ |
| { |
| "start": 181, |
| "end": 200, |
| "text": "Gella et al. (2017)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data alignment:", |
| "sec_num": null |
| }, |
| { |
| "text": "Our results show that multilingual joint training improves upon bilingual joint training, and that grounded sentence representations for a low-resource language can be substantially improved with data from different high-resource languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data alignment:", |
| "sec_num": null |
| }, |
| { |
| "text": "Our results suggest that independently collected captions are more useful than translated captions for the task of learning multilingual multimodal sentence embeddings. Finally, we recommend collecting captions for the same set of images in multiple languages, due to the benefits of the additional caption-caption ranking objective function.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data alignment:", |
| "sec_num": null |
| }, |
| { |
| "text": "Learning visually grounded word representations has been an active area of research in the fields of multimodal semantics and cross-situational word learning. Such perceptually grounded word representations have been shown to lead to higher correlation with human judgements on word-similarity benchmarks such as WordSim353 (Finkelstein et al., 2001) or SimLex999 (Hill et al., 2015) compared to uni-modal representations (K\u00e1d\u00e1r et al., 2015; Bruni et al., 2014; Kiela and Bottou, 2014).", |
| "cite_spans": [ |
| { |
| "start": 324, |
| "end": 350, |
| "text": "(Finkelstein et al., 2001)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 364, |
| "end": 383, |
| "text": "(Hill et al., 2015)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 422, |
| "end": 442, |
| "text": "(K\u00e1d\u00e1r et al., 2015;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 443, |
| "end": 462, |
| "text": "Bruni et al., 2014;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 463, |
| "end": 486, |
| "text": "Kiela and Bottou, 2014)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Grounded representations of sentences that are learned from image-caption data sets also improve performance on a number of sentence-level tasks (Kiela et al., 2017; Yoo et al., 2017) when used as additional features to skip-thought vectors (Kiros et al., 2015). The model architectures used for these studies have the same overall structure as our model and coincide with image-sentence retrieval systems (Kiros et al., 2014; Karpathy and Fei-Fei, 2015): a pre-trained CNN is fixed or fine-tuned as an image feature extractor, followed by a learned transformation, while sentence representations are learned by a randomly initialized recurrent neural network. These models are trained to push the true image-caption pairs closer together, and the false image-caption pairs further from each other, in a joint embedding space.", |
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 165, |
| "text": "(Kiela et al., 2017;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 166, |
| "end": 183, |
| "text": "Yoo et al., 2017)", |
| "ref_id": "BIBREF53" |
| }, |
| { |
| "start": 241, |
| "end": 261, |
| "text": "(Kiros et al., 2015)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 407, |
| "end": 427, |
| "text": "(Kiros et al., 2014;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 428, |
| "end": 454, |
| "text": "Karpathy and Fei-Fei, 2015", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In addition to learning grounded representations for image-sentence ranking, joint vision and language systems have been proposed to solve a wide range of tasks across modalities such as image captioning (Mao et al., 2014; Vinyals et al., 2015; Xu et al., 2015), visual question answering (Antol et al., 2015; Fukui et al., 2016; Jabri et al., 2016), text-to-image synthesis (Reed et al., 2016) and multimodal machine translation (Libovicky and Helcl, 2017; Elliott and K\u00e1d\u00e1r, 2017).", |
| "cite_spans": [ |
| { |
| "start": 204, |
| "end": 222, |
| "text": "(Mao et al., 2014;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 223, |
| "end": 244, |
| "text": "Vinyals et al., 2015;", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 245, |
| "end": 261, |
| "text": "Xu et al., 2015)", |
| "ref_id": null |
| }, |
| { |
| "start": 290, |
| "end": 310, |
| "text": "(Antol et al., 2015;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 311, |
| "end": 330, |
| "text": "Fukui et al., 2016;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 331, |
| "end": 349, |
| "text": "Jabri et al., 2016", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 376, |
| "end": 395, |
| "text": "(Reed et al., 2016)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 431, |
| "end": 458, |
| "text": "(Libovicky and Helcl, 2017;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 459, |
| "end": 483, |
| "text": "Elliott and K\u00e1d\u00e1r, 2017)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our work is also closely related to multilingual joint representation learning. In this scenario, a single model is trained to solve a task across multiple languages. Ammar et al. (2016) train a multilingual dependency parser on the Universal Dependencies treebank (Nivre et al., 2015) and show that on average the single multilingual model outperforms the monolingual baselines. Johnson et al.", |
| "cite_spans": [ |
| { |
| "start": 167, |
| "end": 186, |
| "text": "Ammar et al. (2016)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 265, |
| "end": 285, |
| "text": "(Nivre et al., 2015)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(2016) present a zero-shot neural machine translation model that is jointly trained on language pairs A \u2194 B and B \u2194 C and show that the model is capable of performing well on the unseen language pair A \u2194 C. Lee et al. (2017) find that jointly training a many-languages-to-one translation model on unsegmented character sequences improves BLEU scores compared to monolingual training. They also show evidence that the model can handle intra-sentence code-switching. Peters et al. (2017) train a multilingual sequence-to-sequence translation architecture on grapheme-to-phoneme conversion using more than 300 languages. They report better performance when adding multiple languages, even those which are not present in the test data. Finally, representations learned by multilingual models have been shown to capture similarities between languages (\u00d6stling and Tiedemann, 2016) and can successfully predict linguistic typology features (Malaviya et al., 2017).", |
| "cite_spans": [ |
| { |
| "start": 207, |
| "end": 224, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 780, |
| "end": 809, |
| "text": "(\u00d6stling and Tiedemann, 2016)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 868, |
| "end": 891, |
| "text": "(Malaviya et al., 2017)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Require: p: task switching probability.\nRequire: D_c2i: datasets D_1, ..., D_k of image-caption pairs <c, i> for all k languages.\nRequire: D_c2c: dataset of all possible caption pairs <c_a, c_b> for all k languages.\nRequire: \u03c6(c, \u03b8_\u03c6): caption encoder.\nRequire: \u03c8(i, \u03b8_\u03c8): image encoder.\nwhile not stopping criterion do\n  T \u223c Bern(p)\n  if T = 1 then\n    D_n \u223c D_c2i\n    <c, i> \u223c D_n\n    a \u2190 \u03c6(c, \u03b8_\u03c6)\n    b \u2190 \u03c8(i, \u03b8_\u03c8)\n  else\n    <c_a, c_b> \u223c D_c2c\n    a \u2190 \u03c6(c_a, \u03b8_\u03c6)\n    b \u2190 \u03c6(c_b, \u03b8_\u03c6)\n  end if\n  [\u03b8_\u03c6; \u03b8_\u03c8] \u2190 SGD(\u2207_[\u03b8_\u03c6; \u03b8_\u03c8] J(a, b))\nend while", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
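The task-switching training loop flattened into the text above (the paper's Figure 2) can be sketched in Python. This is an illustrative toy version, not the authors' code; `phi`, `psi`, and `sgd_step` are hypothetical stand-ins for the caption encoder, image encoder, and parameter update.

```python
import random

def train_multilingual(d_c2i, d_c2c, phi, psi, sgd_step, p=0.5, steps=100):
    """Toy sketch of the joint c2i/c2c training loop (Figure 2 of the paper).

    d_c2i: list of per-language datasets, each a list of (caption, image) pairs.
    d_c2c: list of cross-lingual (caption_a, caption_b) pairs.
    phi / psi: caption / image encoders; sgd_step: update given a pair (a, b).
    All names are illustrative stand-ins, not the authors' implementation.
    """
    for _ in range(steps):
        if random.random() < p:           # T ~ Bern(p): caption-image task
            d_n = random.choice(d_c2i)    # sample a language at random
            c, i = random.choice(d_n)     # sample an image-caption pair
            a, b = phi(c), psi(i)
        else:                             # caption-caption task
            c_a, c_b = random.choice(d_c2c)
            a, b = phi(c_a), phi(c_b)
        sgd_step(a, b)                    # minimize the ranking loss J(a, b)
```

The key design point mirrored here is that the same caption encoder `phi` is used for both languages and both tasks, so all parameters are shared.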
| { |
| "text": "In the vision and language domain, multilingual multimodal sentence representation learning has so far been limited to two languages. The joint training of models on English and German data has been shown to outperform monolingual baselines on image-sentence ranking and semantic textual similarity tasks (Gella et al., 2017; Calixto et al., 2017). Recently, Harwath et al. (2018) also showed the benefit of joint bilingual training in the domain of speech-to-image and image-to-speech retrieval using English and Hindi data.", |
| "cite_spans": [ |
| { |
| "start": 304, |
| "end": 324, |
| "text": "(Gella et al., 2017;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 325, |
| "end": 346, |
| "text": "Calixto et al., 2017)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 358, |
| "end": 379, |
| "text": "Harwath et al. (2018)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We train a standard model of grounded language learning which projects images and their textual descriptions into the same space (Kiros et al., 2014; Karpathy and Fei-Fei, 2015) . The training procedure is illustrated by the pseudo-code in Figure 2. Images i are encoded by a fixed pre-trained CNN followed by a learned affine transformation \u03c8(i, \u03b8 \u03c8 ), and captions c are encoded by a randomly initialized RNN \u03c6(c, \u03b8 \u03c6 ). The model learns to minimize the distance between pairs <a, b> using a max-of-hinges ranking loss (Faghri et al., 2017) :", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 149, |
| "text": "(Kiros et al., 2014;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 150, |
| "end": 177, |
| "text": "Karpathy and Fei-Fei, 2015)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 521, |
| "end": 542, |
| "text": "(Faghri et al., 2017)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 240, |
| "end": 246, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Multilingual grounded learning", |
| "sec_num": "3" |
| }, |
| { |
| "text": "J(a, b) = max_{<\u00e2, b>}[max(0, \u03b1 - s(a, b) + s(\u00e2, b))] + max_{<a, b\u0302>}[max(0, \u03b1 - s(a, b) + s(a, b\u0302))]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multilingual grounded learning", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where <a, b> are the true pairs, and <a, b\u0302> and <\u00e2, b> are all possible contrastive pairs in the mini-batch. The pairs either consist of image-caption pairs <i, c>, in which case the model solves a caption-image c2i ranking task, or pairs of captions in multiple languages belonging to the same image <c_a, c_b>, in which case the model solves a caption-caption c2c ranking task (Gella et al., 2017). Our monolingual models are trained to minimize the caption-image ranking objective c2i on the training set. The multilingual models are trained to minimize the ranking loss for the set of all languages L in the collection: at each iteration the model is either updated for the c2i objective or the caption-caption c2c objective, given either a <c_l, i> or a <c_a^k, c_b^m> pair in languages l, k, m, ... \u2208 L. All models are trained by first selecting a task, either c2i or c2c. In the c2i case, a language is sampled at random followed by sampling a random batch; in the c2c case, all possible <c_a, c_b> pairs across all languages are treated as a single data set. All of the model parameters are shared across all tasks and languages.", |
| "cite_spans": [ |
| { |
| "start": 371, |
| "end": 391, |
| "text": "(Gella et al., 2017)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multilingual grounded learning", |
| "sec_num": "3" |
| }, |
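The max-of-hinges loss defined above can be sketched in NumPy for a mini-batch of matched embeddings. This is a minimal illustration of the VSE++-style objective under the stated cosine-similarity setup, not the authors' exact code; `alpha` is the margin and row k of `A` is assumed to match row k of `B`.

```python
import numpy as np

def max_of_hinges_loss(A, B, alpha=0.2):
    """Max-of-hinges ranking loss J(a, b) over a mini-batch.

    A, B: L2-normalized embeddings of shape (n, d); row k of A is the
    true match of row k of B. For each positive pair, only the hardest
    contrastive example in the batch contributes to the loss.
    Illustrative sketch, not the paper's implementation.
    """
    S = A @ B.T                  # cosine similarities s(a, b)
    pos = np.diag(S)             # s(a_k, b_k) for the true pairs
    n = S.shape[0]
    mask = np.eye(n, dtype=bool)
    # hardest contrastive caption for each image: max over <a-hat, b>
    cost_a = np.where(mask, -np.inf, alpha - pos[None, :] + S).max(axis=0)
    # hardest contrastive image for each caption: max over <a, b-hat>
    cost_b = np.where(mask, -np.inf, alpha - pos[:, None] + S).max(axis=1)
    return np.maximum(0, cost_a).sum() + np.maximum(0, cost_b).sum()
```

With perfectly aligned embeddings (e.g. `A == B == np.eye(3)`) every hinge is inactive and the loss is zero, which is the intended behaviour of a margin-based ranking objective.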
| { |
| "text": "Implementation. We build our model on the PyTorch implementation of the VSE++ model (Faghri et al., 2017). Images are represented by the 2048D average-pool features extracted from the ResNet50 architecture (He et al., 2016) trained on ImageNet (Deng et al., 2009); this is followed by a trained linear layer W_I \u2208 R^{2048\u00d71024}. Other implementation details follow (Faghri et al., 2017): sentences are represented as the final hidden state of a GRU (Chung et al., 2014) with 1024 units and 300-dimensional word embeddings trained from scratch. We use a single word embedding matrix containing the union of all words in all considered languages. The similarity function s in the ranking loss is cosine similarity. We \u21132-normalize both the caption and image representations. The model is trained with the Adam optimizer (Kingma and Ba, 2014) using default parameters and a learning rate of 2e-4. We train the model with an early stopping criterion, which is to maximise the sum of the image-sentence recall scores R@1, R@5, R@10 on the validation set with a patience of 10 evaluations. In the monolingual setting the stopping criterion is evaluated at the end of each epoch, whereas in the multilingual setup it is evaluated every 500 iterations. The probability of switching between the c2i and c2c tasks is set to 0.5. Batches from all data sets are sampled by shuffling the full dataset, going through each batch, and re-shuffling when exhausted. The sentence-pair dataset used to train the c2c ranking model is generated as follows: for a given image i with associated caption sets C_i^1, ..., C_i^k in k languages, we generate the set of all possible combinations of size 2 from the caption sets and add the Cartesian product C_i^m \u00d7 C_i^n of each resulting pair to the training set.", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 106, |
| "text": "(Faghri et al., 2017)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 208, |
| "end": 225, |
| "text": "(He et al., 2016)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 246, |
| "end": 265, |
| "text": "(Deng et al., 2009)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 367, |
| "end": 388, |
| "text": "(Faghri et al., 2017)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 452, |
| "end": 472, |
| "text": "(Chung et al., 2014)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 820, |
| "end": 841, |
| "text": "(Kingma and Ba, 2014)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multilingual grounded learning", |
| "sec_num": "3" |
| }, |
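The c2c pair generation described above (size-2 combinations of per-language caption sets, followed by their Cartesian product) can be sketched with the standard library. This is an illustration of the data-generation scheme, not the authors' code; the `caption_sets` dict layout is an assumption.

```python
from itertools import combinations, product

def caption_pairs(caption_sets):
    """All cross-lingual caption pairs for a single image.

    caption_sets: dict mapping language -> list of captions of the same image
    (a hypothetical layout chosen for this sketch). For every size-2
    combination of languages, the Cartesian product of their caption sets
    is added to the c2c training pairs, as described in Section 3.
    """
    pairs = []
    for lang_m, lang_n in combinations(sorted(caption_sets), 2):
        pairs.extend(product(caption_sets[lang_m], caption_sets[lang_n]))
    return pairs
```

For the comparable English-German data with five captions per language per image, this yields 5 x 5 = 25 pairs per image for that language combination.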
| { |
| "text": "Datasets. We train and evaluate our models on the translation and comparable portions of the Multi30K dataset (Elliott et al., 2016). The translation portion (a low-resource dataset) contains 29K images, each described in one English caption with German, French, and Czech translations. The comparable portion (a higher-resource dataset) contains the same 29K images paired with five English and five German descriptions collected independently. Figure 1 presents an example of the translation and comparable portions of the data. We used the preprocessed version of the dataset, in which the text is lowercased, punctuation is normalized, and the text is tokenized. To reduce the vocabulary size of the joint models, we replace all words occurring fewer than four times with a special \"UNK\" symbol. Table 1 shows the overlap between the vocabularies of the translation portion of the Multi30K dataset. The total number of tokens across all four languages is 17,571, and taking the union of the tokens in these four languages results in a vocabulary of 16,553 tokens - a 6% reduction in vocabulary size. On the comparable portion of the dataset, the total vocabulary between English and German contains 18,337 tokens, with a union of 17,667, which is a 4% reduction in vocabulary size.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 131, |
| "text": "(Elliott et al., 2016", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 446, |
| "end": 454, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 804, |
| "end": 811, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental setup 2", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Evaluation. We evaluate our models on the 1K images of the 2016 test set of Multi30K, using either the 5K captions from the comparable data or the 1K translation pairs. We evaluate on image-to-text (I\u2192T) and text-to-image (T\u2192I) retrieval tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup 2", |
| "sec_num": "4" |
| }, |
| { |
| "text": "For most experiments we report Recall at 1 (R@1), 5 (R@5) and 10 (R@10) scores averaged over 10 randomly initialised models. However, in Section 6 we only report R@10 due to space limitations and because it has less variance than R@1 or R@5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup 2", |
| "sec_num": "4" |
| }, |
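The Recall@K metrics reported throughout can be computed from a query-item similarity matrix as follows. This is a minimal sketch of the standard metric under the assumption that item i is the correct match for query i, not the authors' evaluation code.

```python
import numpy as np

def recall_at_k(sims, ks=(1, 5, 10)):
    """Recall@K for a retrieval task.

    sims: array of shape (n, n) where sims[i, j] is the similarity of
    query i and candidate j, and candidate i is the true match of query i
    (an assumption of this sketch). Returns {k: fraction of queries whose
    true match appears in the top k candidates}.
    """
    ranks = (-sims).argsort(axis=1)             # candidates, best first
    correct = np.arange(sims.shape[0])[:, None]
    pos = (ranks == correct).argmax(axis=1)     # rank of the true match
    return {k: float((pos < k).mean()) for k in ks}
```

The early-stopping criterion described above would then be the sum of the three values returned for ks=(1, 5, 10).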
| { |
| "text": "5 Bilingual Experiments. 5.1 Reproducing Gella et al. (2017). We start by attempting to reproduce the findings of Gella et al. (2017). In these experiments we train our multi-task learning model on the comparable portion of Multi30K. Our models reimplement their setups used for VSE (Monolingual) and the bilingual models Pivot-Sym (Bilingual) and Parallel-Sym (Bilingual + c2c). The OE, Pivot-Asym and Parallel-Asym models are trained using the asymmetric similarity measure introduced for the order-embeddings (Vendrov et al., 2015). The main differences between our models and Gella et al. (2017) are that they use VGG-19 image features, whereas we use ResNet50 features, and that we use the max-of-hinges loss instead of the more common sum-of-hinges loss. Table 2 shows the results on the English comparable 2016 test set. Overall our scores are higher than Gella et al. (2017), which is most likely due to the different image features (Faghri et al. (2017) also report a large performance gain when they use the ResNet instead of the VGG image features). Nevertheless, our results show a similar trend to the symmetric cosine similarity models from Gella et al. (2017): our best results are achieved with bilingual joint training with the added c2c objective. Their models trained with an asymmetric similarity measure show a different trend: the monolingual model is stronger than the bilingual model, and the c2c loss provides no clear improvement. Table 3 presents the German results. Once again, our implementation outperforms Gella et al. (2017), and this is likely due to the different visual features and the max-of-hinges loss. However, our Bilingual model with the additional c2c objective performs best for German, whereas Gella et al. (2017) report the overall best results for the monolingual baseline VSE. Their models that use the asymmetric similarity function are clearly better than the Monolingual OE model. In general, the results from Gella et al. (2017) indicate the benefits of bilingual joint training; however, they do not find a clear pattern between the model configurations across languages. In our implementation, we only focused on the symmetric cosine similarity function and found a systematic pattern across both languages: bilingual training improves results on all performance metrics for both languages, and the additional c2c objective always provides further improvements.", |
| "cite_spans": [ |
| { |
| "start": 109, |
| "end": 128, |
| "text": "Gella et al. (2017)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 504, |
| "end": 526, |
| "text": "(Vendrov et al., 2015)", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 573, |
| "end": 592, |
| "text": "Gella et al. (2017)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 850, |
| "end": 869, |
| "text": "Gella et al. (2017)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 929, |
| "end": 950, |
| "text": "(Faghri et al. (2017)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1143, |
| "end": 1162, |
| "text": "Gella et al. (2017)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 1526, |
| "end": 1545, |
| "text": "Gella et al. (2017)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 1729, |
| "end": 1748, |
| "text": "Gella et al. (2017)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 1952, |
| "end": 1971, |
| "text": "Gella et al. (2017)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 748, |
| "end": 755, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 1446, |
| "end": 1453, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental setup 2", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We now study whether the model can be trained on either translation pairs or independently collected bilingual captions. Gella et al. (2017) only conducted experiments on independently collected captions. However, it is known that humans have an equally strong preference for translated or independently collected captions of images (Frank et al., 2018), which has implications for the difficulty and cost of collecting training data. Our baseline is a Monolingual model trained on the 29K single-captioned images in the translation portion of Multi30K. The Bi-translation model is trained on both German and English, with shared parameters. Table 4 shows that there is a substantial improvement in performance for both languages in the bilingual setting. However, the additional c2c loss degrades performance here. This could be because we only have one caption per image in each language, and it is easier to find a relationship between these views of the translation pairs.", |
| "cite_spans": [ |
| { |
| "start": 121, |
| "end": 140, |
| "text": "Gella et al. (2017)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 330, |
| "end": 350, |
| "text": "(Frank et al., 2018)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 635, |
| "end": 642, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Translations vs. independent captions", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In the Bi-comparable setting, we randomly select an English and a German sentence for each image in the comparable portion of Multi30K. We only find a minor difference in performance between the Bi-translation and Bi-comparable models for English, but the German results are improved.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translations vs. independent captions", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Table 3: German Image-to-text (I\u2192T) and text-to-image (T\u2192I) retrieval results on the comparable part of Multi30K, measured by Recall at 1, 5 and 10. Typewriter font shows the performance of the two sets of symmetric and asymmetric models from Gella et al. (2017).", |
| "cite_spans": [ |
| { |
| "start": 261, |
| "end": 280, |
| "text": "Gella et al. (2017)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 26, |
| "end": 33, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "I\u2192T T\u2192I", |
| "sec_num": null |
| }, |
| { |
| "text": "Crucially, it is still better than training on monolingual data. In the Bi-comparable setting, the c2c loss does not have a detrimental effect on model performance, unlike in the Bi-translation experiment. Overall, we find that the comparable data leads to larger improvements in retrieval performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "I\u2192T T\u2192I", |
| "sec_num": null |
| }, |
| { |
| "text": "In a bilingual setting, we can improve an imagesentence ranking model by collecting more data in a second language. This can be achieved in two ways: by collecting captions in a new language for the same overlapping set of images, or by using a disjoint set of images and captions in a new language. We compare these two settings here. In the Bi-overlap condition, we collect captions for the existing images in a new language, i.e. we use all of the English and German captions paired with a random selection of 50% of the images in comparable Multi30K. This results in a training dataset of 14.5K images with 145K bilingual captions. In the Bi-disjoint condition, we collect captions for new images in a new language, i.e. we use all of the English captions from a random selection of 50% of the images, and all of the German captions for the remaining 50% of the images. This results in a training dataset on 29K images with a total of 145K bilingual captions. Table 5 shows the results of this experiment. The upper-bound is to train a Monolingual model on the full comparable corpus. For the lower bound, we train Half Monolingual models by randomly sampling half of the 29K images and their associated captions, giving 72.5K captions over 14.5K images. Unsurprisingly, the Half Monolingual models perform worse than the Full Monolingual models. In the Bi-overlap experiment, the German model is improved by collecting captions for the existing images in English. There is no difference in the performance of the English model, echoing the results from Section 5.1. The Bi-overlap model also benefits from the added c2c objective. Finally, the Bi-disjoint model performs as well as the Bioverlap model without the c2c objective. (It was not possible to train the Bi-disjoint model with the additional c2c objective because there are no caption pairs for the same image.)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 964, |
| "end": 971, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Overlapping vs. non-overlapping images", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Overall, these results suggest that it is best to collect additional captions in the original language, but when adding a second language, it is better to collect extra captions for existing images and exploit the additional c2c ranking objective. Bi-disjoint 73.1 62.1 67.9 54.9 ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Overlapping vs. non-overlapping images", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We now turn out attention to multilingual learning using the English, German, French and Czech annotations in the translation portion of Multi30K. We only report the text-to-image (T\u2192I) R@10 results due to space limitations. We did not repeat the overlapping vs. nonoverlapping experiments from Section 5.3 in a multilingual setting because this would introduce too much data sparsity. In order to conduct this experiment, we would have to downsample the already low-resource French and Czech captions by 50%, or even further for multi-way experiments. Table 6 shows the results of repeating the translations vs. comparable captions experiment from Section 5.2 with data in four languages. The Multi-translation models are trained on 29K images paired with a single caption in each language. These models perform better than their Monolingual counterparts, and the German, French, and Czech models are further improved with the c2c objective. The Multi-comparable models are trained by randomly sampling one English and one German caption from the comparable dataset, alongside the French and Czech translation pairs. These models perform as well as the Multi-translation models, and the c2c objective brings further improvements for all languages in this setting. These results clearly demonstrate the advantage of jointly training on more than two languages. Text-to-image retrieval performance increases by more than 11 R@10 points for each of the four languages in our experiment.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 553, |
| "end": 560, |
| "text": "Table 6", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Multilingual experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We now examine whether the lower-resource French and Czech models benefit from training with the full complement of the higher-resource English and German comparable data. Therefore we train a joint model on the translation as well as comparable portions of Multi30K, and examine the performance on French and Czech. Table 7 shows the results of this experiment. We find that the French and Czech models improve by 8.8 and 5.5 R@10 points respectively when they are only trained on the multilingual translation pairs (compared to the monolingual version), and by another 2.2 and 2.8 points if trained on the extra 155K English and German comparable descriptions. We also find that the additional c2c objective improves the Czech model by a further 4.8 R@10 points (this improvement is likely caused by training the model on 46 possible caption pairs). Our results show the impact of jointly training with the larger English and German resources, which demonstrates the benefits of high-to-low resource transfer.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 317, |
| "end": 324, |
| "text": "Table 7", |
| "ref_id": "TABREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "High-to-low resource transfer", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Finally, we investigate how useful it is to train on four languages instead of two. Figure 3 presents the image-to-text and text-to-image retrieval results of training Monolingual, Bilingual, or Multilingual models. The Monolingual and Bilingual models are trained on a random single-caption-image subsam- ple of the comparable dataset with the additional c2c objective, as this configuration provided the overall best results in Sections 5.2 and 6.1. The Multilingual models are trained with the additional French and Czech translation data. As can be seen in Figure 3 , the performance on both tasks and for both languages improves as we move from using data from one to two to four languages.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 84, |
| "end": 92, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 561, |
| "end": 569, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Bilingual vs. multilingual", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "We learn multilingual multimodal sentence embeddings and show that multilingual joint training improves over bilingual joint training. We also demonstrate that low-resource languages can benefit from the additional data found in high-resource languages. Our experiments suggest that either translation pairs or independently-collected captions improve the performance of a multilingual model, and that the latter data setting provides further improvements through a caption-caption ranking objective. We also show that when collecting data in an additional language, it is better to collect captions for the existing images because we can exploit the caption-caption objective. Our results lead to several directions for future work. We would like to pin down the mechanism via which multilingual training contributes to improved performance for image-sentence ranking. Additionally, we only consider four languages and show the gain of multilingual over bilingual training only for the English-German language pair. In future work we will incorporate more languages from data sets such as the Chinese Flickr8K (Li et al., 2016) or Japanese COCO (Miyazaki and Shimizu, 2016) . ", |
| "cite_spans": [ |
| { |
| "start": 1111, |
| "end": 1128, |
| "text": "(Li et al., 2016)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 1146, |
| "end": 1174, |
| "text": "(Miyazaki and Shimizu, 2016)", |
| "ref_id": "BIBREF41" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Gloss: Three men and two women with a South-East Asian appearance eat out of bowls at a black table, on which there are, among other things, paper cups and a bag; in the background there are other people and tables.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Code to reproduce our results is available at https://github.com/kadarakos/mulisera.3 https://github.com/multi30k/dataset", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Desmond Elliott was supported by an Amazon Research Award.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Banea", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Cer", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Diab", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Gonzalez-Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Lopez-Gazpio", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Maritxalar", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 9th international workshop on semantic evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "252--263", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Agirre, E., Banea, C., Cardie, C., Cer, D., Diab, M., Gonzalez-Agirre, A., Guo, W., Lopez-Gazpio, I., Maritxalar, M., Mihalcea, R., et al. (2015). Semeval- 2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In Proceedings of the 9th international workshop on semantic eval- uation (SemEval 2015), pages 252-263.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Semeval-2014 task 10: Multilingual semantic textual similarity", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Banea", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Cer", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Diab", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Gonzalez-Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Rigau", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 8th international workshop on semantic evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "81--91", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Agirre, E., Banea, C., Cardie, C., Cer, D., Diab, M., Gonzalez-Agirre, A., Guo, W., Mihalcea, R., Rigau, G., and Wiebe, J. (2014). Semeval-2014 task 10: Multilingual semantic textual similarity. In Proceed- ings of the 8th international workshop on semantic evaluation (SemEval 2014), pages 81-91.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Many languages, one parser", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Ammar", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Mulcaire", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1602.01595" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ammar, W., Mulcaire, G., Ballesteros, M., Dyer, C., and Smith, N. A. (2016). Many languages, one parser. arXiv preprint arXiv:1602.01595.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Vqa: Visual question answering", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Antol", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Agrawal", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Lawrence Zitnick", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the IEEE International Conference on Computer Vision", |
| "volume": "", |
| "issue": "", |
| "pages": "2425--2433", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Lawrence Zitnick, C., and Parikh, D. (2015). Vqa: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425-2433.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Code mixing: A challenge for language identification in the language of social media", |
| "authors": [ |
| { |
| "first": "U", |
| "middle": [], |
| "last": "Barman", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wagner", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the first workshop on computational approaches to code switching", |
| "volume": "", |
| "issue": "", |
| "pages": "13--23", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barman, U., Das, A., Wagner, J., and Foster, J. (2014). Code mixing: A challenge for language identifica- tion in the language of social media. In Proceedings of the first workshop on computational approaches to code switching, pages 13-23.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Grounding distributional semantics in the visual world. Language and Linguistics Compass", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "10", |
| "issue": "", |
| "pages": "3--13", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baroni, M. (2016). Grounding distributional semantics in the visual world. Language and Linguistics Com- pass, 10(1):3-13.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Grounding conceptual knowledge in modality-specific systems", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [ |
| "W" |
| ], |
| "last": "Barsalou", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "K" |
| ], |
| "last": "Simmons", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "K" |
| ], |
| "last": "Barbey", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Wilson", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Trends in cognitive sciences", |
| "volume": "7", |
| "issue": "", |
| "pages": "84--91", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barsalou, L. W., Simmons, W. K., Barbey, A. K., and Wilson, C. D. (2003). Grounding conceptual knowl- edge in modality-specific systems. Trends in cogni- tive sciences, 7(2):84-91.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Multimodal distributional semantics", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Bruni", |
| "suffix": "" |
| }, |
| { |
| "first": "N.-K", |
| "middle": [], |
| "last": "Tran", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "49", |
| "issue": "", |
| "pages": "1--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bruni, E., Tran, N.-K., and Baroni, M. (2014). Multi- modal distributional semantics. Journal of Artificial Intelligence Research, 49:1-47.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Multilingual multi-modal embeddings for natural language processing", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Calixto", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Campbell", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1702.01101" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Calixto, I., Liu, Q., and Campbell, N. (2017). Multilin- gual multi-modal embeddings for natural language processing. arXiv preprint arXiv:1702.01101.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A character-level decoder without explicit segmentation for neural machine translation", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Chung", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1603.06147" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chung, J., Cho, K., and Bengio, Y. (2016). A character-level decoder without explicit segmenta- tion for neural machine translation. arXiv preprint arXiv:1603.06147.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Chung", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Gulcehre", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.3555" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chung, J., Gulcehre, C., Cho, K., and Bengio, Y. (2014). Empirical evaluation of gated recurrent neu- ral networks on sequence modeling. arXiv preprint arXiv:1412.3555.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Imagenet: A large-scale hierarchical image database", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Dong", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "L.-J", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Computer Vision and Pattern Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "248--255", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). Imagenet: A large-scale hier- archical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Con- ference on, pages 248-255. IEEE.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Quirk", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 20th international conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dolan, B., Quirk, C., and Brockett, C. (2004). Un- supervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Pro- ceedings of the 20th international conference on Computational Linguistics, page 350. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Findings of the second shared task on multimodal machine translation and multilingual image description", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Elliott", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Barrault", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Bougares", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Second Conference on Machine Translation", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elliott, D., Frank, S., Barrault, L., Bougares, F., and Specia, L. (2017). Findings of the second shared task on multimodal machine translation and multi- lingual image description. In Proceedings of the Sec- ond Conference on Machine Translation, Volume 2: Shared Task Papers.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Multi30k: Multilingual english-german image descriptions", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Elliott", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Sima'an", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1605.00459" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elliott, D., Frank, S., Sima'an, K., and Specia, L. (2016). Multi30k: Multilingual english-german im- age descriptions. arXiv preprint arXiv:1605.00459.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Imagination improves multimodal translation", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Elliott", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "K\u00e1d\u00e1r", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1705.04350" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elliott, D. and K\u00e1d\u00e1r, A. (2017). Imagination im- proves multimodal translation. arXiv preprint arXiv:1705.04350.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Vse++: Improved visual-semantic embeddings", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Faghri", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "J" |
| ], |
| "last": "Fleet", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Kiros", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Fidler", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1707.05612" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Faghri, F., Fleet, D. J., Kiros, J. R., and Fidler, S. (2017). Vse++: Improved visual-semantic embed- dings. arXiv preprint arXiv:1707.05612.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Placing search in context: The concept revisited", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Finkelstein", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Matias", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Rivlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Solan", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Wolfman", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Ruppin", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 10th international conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "406--414", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Finkelstein, L., Gabrilovich, E., Matias, Y., Rivlin, E., Solan, Z., Wolfman, G., and Ruppin, E. (2001). Placing search in context: The concept revisited. In Proceedings of the 10th international conference on World Wide Web, pages 406-414. ACM.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Assessing multilingual multimodal image description: Studies of native speaker preferences and translator choices", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Elliott", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Natural Language Engineering", |
| "volume": "24", |
| "issue": "3", |
| "pages": "393--413", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frank, S., Elliott, D., and Specia, L. (2018). Assessing multilingual multimodal image description: Studies of native speaker preferences and translator choices. Natural Language Engineering, 24(3):393-413.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Multimodal compact bilinear pooling for visual question answering and visual grounding", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Fukui", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "H" |
| ], |
| "last": "Park", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Rohrbach", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Darrell", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Rohrbach", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1606.01847" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fukui, A., Park, D. H., Yang, D., Rohrbach, A., Darrell, T., and Rohrbach, M. (2016). Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Image-mediated learning for zero-shot cross-lingual document retrieval", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Funaki", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Nakayama", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "585--590", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Funaki, R. and Nakayama, H. (2015). Image-mediated learning for zero-shot cross-lingual document re- trieval. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 585-590.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Image pivoting for learning multilingual multimodal representations", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Gella", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Keller", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1707.07601" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gella, S., Sennrich, R., Keller, F., and Lapata, M. (2017). Image pivoting for learning multilin- gual multimodal representations. arXiv preprint arXiv:1707.07601.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Vision as an interlingua: Learning multilingual semantic embeddings of untranscribed speech", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Harwath", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Chuang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Glass", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1804.03052" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Harwath, D., Chuang, G., and Glass, J. (2018). Vision as an interlingua: Learning multilingual semantic embeddings of untranscribed speech. arXiv preprint arXiv:1804.03052.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Deep residual learning for image recognition", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "770--778", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceed- ings of the IEEE conference on computer vision and pattern recognition, pages 770-778.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Computational Linguistics", |
| "volume": "41", |
| "issue": "4", |
| "pages": "665--695", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hill, F., Reichart, R., and Korhonen, A. (2015). Simlex- 999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665-695.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Revisiting visual question answering baselines", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Jabri", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Van Der Maaten", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "European conference on computer vision", |
| "volume": "", |
| "issue": "", |
| "pages": "727--739", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jabri, A., Joulin, A., and van der Maaten, L. (2016). Revisiting visual question answering baselines. In European conference on computer vision, pages 727-739. Springer.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Google's multilingual neural machine translation system: enabling zeroshot translation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Krikun", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Thorat", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Vi\u00e9gas", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Wattenberg", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1611.04558" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johnson, M., Schuster, M., Le, Q. V., Krikun, M., Wu, Y., Chen, Z., Thorat, N., Vi\u00e9gas, F., Wattenberg, M., Corrado, G., et al. (2016). Google's multilingual neural machine translation system: enabling zero- shot translation. arXiv preprint arXiv:1611.04558.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Learning word meanings from images of natural scenes", |
| "authors": [ |
| { |
| "first": "\u00c1", |
| "middle": [], |
| "last": "K\u00e1d\u00e1r", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Alishahi", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Chrupa\u0142a", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Traitement Automatique des Langues", |
| "volume": "55", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K\u00e1d\u00e1r, \u00c1., Alishahi, A., and Chrupa\u0142a, G. (2015). Learning word meanings from images of natural scenes. Traitement Automatique des Langues, 55(3).", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Deep visualsemantic alignments for generating image descriptions", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Karpathy", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "3128--3137", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karpathy, A. and Fei-Fei, L. (2015). Deep visual- semantic alignments for generating image descrip- tions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128-3137.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Learning image embeddings using convolutional neural networks for improved multi-modal semantics", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "36--45", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kiela, D. and Bottou, L. (2014). Learning image em- beddings using convolutional neural networks for improved multi-modal semantics. In Proceedings of the 2014 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 36-45.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Learning visually grounded sentence representations", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Jabri", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Nickel", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1707.06320" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kiela, D., Conneau, A., Jabri, A., and Nickel, M. (2017). Learning visually grounded sentence repre- sentations. arXiv preprint arXiv:1707.06320.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Improving multi-modal representations using image dispersion: Why less is sometimes more", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| }, |
| { |
| "first": "Clark", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "835--841", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kiela, D., Hill, F., Korhonen, A., and Clark, S. (2014). Improving multi-modal representations using image dispersion: Why less is sometimes more. In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 835-841.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.6980" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Unifying visual-semantic embeddings with multimodal neural language models", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Kiros", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zemel", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1411.2539" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kiros, R., Salakhutdinov, R., and Zemel, R. S. (2014). Unifying visual-semantic embeddings with multi- modal neural language models. arXiv preprint arXiv:1411.2539.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Skipthought vectors", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Kiros", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "R" |
| ], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Zemel", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Urtasun", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Torralba", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Fidler", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3294--3302", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kiros, R., Zhu, Y., Salakhutdinov, R. R., Zemel, R., Ur- tasun, R., Torralba, A., and Fidler, S. (2015). Skip- thought vectors. In Advances in neural information processing systems, pages 3294-3302.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Fully character-level neural machine translation without explicit segmentation", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Hofmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "5", |
| "issue": "", |
| "pages": "365--378", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lee, J., Cho, K., and Hofmann, T. (2017). Fully character-level neural machine translation without explicit segmentation. Transactions of the Associ- ation for Computational Linguistics, 5:365-378.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Adding chinese captions to images", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Lan", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Dong", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "271--275", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, X., Lan, W., Dong, J., and Liu, H. (2016). Adding chinese captions to images. In Proceedings of the 2016 ACM on International Conference on Multime- dia Retrieval, pages 271-275. ACM.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Attention strategies for multi-source sequence-to-sequence learning", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Libovicky", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Helcl", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "196--202", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Libovicky, J. and Helcl, J. (2017). Attention strate- gies for multi-source sequence-to-sequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 196-202, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Learning language representations for typology prediction", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Malaviya", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Neubig", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Littell", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1707.09569" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Malaviya, C., Neubig, G., and Littell, P. (2017). Learn- ing language representations for typology prediction. arXiv preprint arXiv:1707.09569.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Deep captioning with multimodal recurrent neural networks (m-rnn)", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Mao", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuille", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.6632" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mao, J., Xu, W., Yang, Y., Wang, J., Huang, Z., and Yuille, A. (2014). Deep captioning with multimodal recurrent neural networks (m-rnn). arXiv preprint arXiv:1412.6632.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "A sick cure for the evaluation of compositional distributional semantic models", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Marelli", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Menini", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Bentivogli", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Bernardi", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Zamparelli", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "216--223", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marelli, M., Menini, S., Baroni, M., Bentivogli, L., Bernardi, R., Zamparelli, R., et al. (2014). A sick cure for the evaluation of compositional distribu- tional semantic models. In LREC, pages 216-223.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Cross-lingual image caption generation", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Miyazaki", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Shimizu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1780--1790", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miyazaki, T. and Shimizu, N. (2016). Cross-lingual image caption generation. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1780-1790. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Zero-resource machine translation by multimodal encoder-decoder network with multimedia pivot. Machine Translation", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Nakayama", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Nishida", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "31", |
| "issue": "", |
| "pages": "49--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nakayama, H. and Nishida, N. (2017). Zero-resource machine translation by multimodal encoder-decoder network with multimedia pivot. Machine Transla- tion, 31(1-2):49-64.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Continuous multilinguality with language vectors", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "\u00d6stling", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Tiedemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1612.07486" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "\u00d6stling, R. and Tiedemann, J. (2016). Continuous mul- tilinguality with language vectors. arXiv preprint arXiv:1612.07486.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Massively multilingual neural grapheme-tophoneme conversion", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Dehdari", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Van Genabith", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "19--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peters, B., Dehdari, J., and van Genabith, J. (2017). Massively multilingual neural grapheme-to- phoneme conversion. In Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems, pages 19-26.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Bridge correlational neural networks for multilingual multimodal representation learning", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Rajendran", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "M" |
| ], |
| "last": "Khapra", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Chandar", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Ravindran", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1510.03519" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rajendran, J., Khapra, M. M., Chandar, S., and Ravin- dran, B. (2015). Bridge correlational neural net- works for multilingual multimodal representation learning. arXiv preprint arXiv:1510.03519.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Generative adversarial text to image synthesis", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Reed", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Akata", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Yan", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Logeswaran", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Schiele", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1605.05396" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., and Lee, H. (2016). Generative ad- versarial text to image synthesis. arXiv preprint arXiv:1605.05396.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "A shared task on multimodal machine translation and crosslingual image description", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Sima'an", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Elliott", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the First Conference on Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "543--553", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Specia, L., Frank, S., Sima'an, K., and Elliott, D. (2016). A shared task on multimodal machine trans- lation and crosslingual image description. In Pro- ceedings of the First Conference on Machine Trans- lation, pages 543-553, Berlin, Germany. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Order-embeddings of images and language", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Vendrov", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Kiros", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Fidler", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Urtasun", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1511.06361" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vendrov, I., Kiros, R., Fidler, S., and Urtasun, R. (2015). Order-embeddings of images and language. arXiv preprint arXiv:1511.06361.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Show and tell: A neural image caption generator", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Toshev", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Erhan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on", |
| "volume": "", |
| "issue": "", |
| "pages": "3156--3164", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vinyals, O., Toshev, A., Bengio, S., and Erhan, D. (2015). Show and tell: A neural image caption gen- erator. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 3156- 3164. IEEE.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "Neural image caption generation with visual attention", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Show", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "2048--2057", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Show, attend and tell: Neural image caption gener- ation with visual attention. In International Confer- ence on Machine Learning, pages 2048-2057.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Improving visually grounded sentence representations with selfattention", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "M" |
| ], |
| "last": "Yoo", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Shin", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1712.00609" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoo, K. M., Shin, Y., and Lee, S.-g. (2017). Improving visually grounded sentence representations with self- attention. arXiv preprint arXiv:1712.00609.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "Transfer learning for low-resource neural machine translation", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Zoph", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Yuret", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "May", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1604.02201" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zoph, B., Yuret, D., May, J., and Knight, K. (2016). Transfer learning for low-resource neural machine translation. arXiv preprint arXiv:1604.02201.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "An example taken from the Translation and Comparable portions of the Multi30K dataset. The translation portion (a) contains professional translations of the English captions into German, French, and Czech. The comparable portion (b) consists of five independently crowdsourced English and German descriptions, given only the image. Note that the sentences in (b) convey different information from the English-German translation pair in (a)." |
| }, |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Pseudo-code of the training procedure used to train our multilingual multi-task model. massively multilingual language representations trained on over 900 languages have been shown to resemble language families" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Comparing models from the Monolingual, Bilingual and Multilingual settings. The Monolingual and Bilingual models are trained on the downsampled English and German comparable sets with additional c2c objective. The Multilingual model uses the French and Czech translation pairs as additional data. The results are reported on the full 2016 test set of the comparable portion of Multi30K." |
| }, |
| "TABREF3": { |
| "num": null, |
| "content": "<table><tr><td/><td/><td>I\u2192T</td><td/><td>T\u2192I</td><td/></tr><tr><td/><td/><td colspan=\"4\">R@1 R@5 R@10 R@1 R@5 R@10</td></tr><tr><td>Symmetric</td><td>VSE Pivot-Sym Parallel-Sym</td><td>29.3 58.1 26.9 56.6 28.2 57.7</td><td>71.8 70.0 71.3</td><td>20.3 47.2 20.3 46.4 20.9 46.9</td><td>60.1 59.2 59.3</td></tr><tr><td>Asymmetric</td><td colspan=\"2\">OE Pivot-Asym Parallel-Asym 30.2 60.4 26.8 57.5 28.2 61.9</td><td>70.9 73.4 72.8</td><td>21.0 48.5 22.5 49.3 21.8 50.5</td><td>60.4 61.7 62.3</td></tr><tr><td/><td>Monolingual</td><td>34.2 63.0</td><td>74.0</td><td>23.9 49.5</td><td>60.5</td></tr><tr><td/><td>Bilingual</td><td>35.2 64.3</td><td>75.3</td><td>24.6 50.8</td><td>62.0</td></tr><tr><td/><td>+ c2c</td><td>37.9 66.1</td><td>76.8</td><td>26.6 53.0</td><td>64.0</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "text": "English Image-to-text (I\u2192T) and text-to-image (T\u2192I) retrieval results on the comparable part of Multi30K, measured by Recall at 1, 5 at 10. Typewriter font shows performance of two sets of symmetric and asymmetric models fromGella et al. (2017)." |
| }, |
| "TABREF5": { |
| "num": null, |
| "content": "<table><tr><td>: R@10 retrieval results on the comparable</td></tr><tr><td>part of Multi30K. Bi-translation is trained on 29K</td></tr><tr><td>translation pair data; bi-comparable is trained by</td></tr><tr><td>downsampling the comparable data to 29K.</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "text": "" |
| }, |
| "TABREF7": { |
| "num": null, |
| "content": "<table><tr><td>: R@10 retrieval results on the compara-</td></tr><tr><td>ble part of Multi30K. Full model trained on the</td></tr><tr><td>29K images of the comparable part, Half model on</td></tr><tr><td>14.5K images using random downsampling. For</td></tr><tr><td>Bi-overlap, both English and German captions are</td></tr><tr><td>used for 14.5K images. For Bi-disjoint, 14.5K im-</td></tr><tr><td>ages are used for English and the remaining 14.5K</td></tr><tr><td>images for German.</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "text": "" |
| }, |
| "TABREF9": { |
| "num": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "text": "The Monolingual and joint Multitranslation models trained on translation pairs, and the Multi-comparable trained on the downsampled comparable set with one caption per image." |
| }, |
| "TABREF11": { |
| "num": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "text": "Multilingual is trained on all translation pairs, + Comparable adds the comparable data set." |
| } |
| } |
| } |
| } |