| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T04:34:08.995716Z" |
| }, |
| "title": "IITP at WAT 2021: System description for English-Hindi Multimodal Translation Task", |
| "authors": [ |
| { |
| "first": "Baban", |
| "middle": [], |
| "last": "Gain", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "gainbaban@gmail.com" |
| }, |
| { |
| "first": "Dibyanayan", |
| "middle": [], |
| "last": "Bandyopadhyay", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "dibyanayan@gmail.com" |
| }, |
| { |
| "first": "Asif", |
| "middle": [], |
| "last": "Ekbal", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Neural Machine Translation (NMT) is a predominant machine translation technology nowadays because of its end-to-end trainable flexibility. However, NMT still struggles to translate properly in low-resource settings specifically on distant language pairs. One way to overcome this is to use the information from other modalities if available. The idea is that despite differences in languages, both the source and target language speakers see the same thing and the visual representation of both the source and target is the same, which can positively assist the system. Multimodal information can help the NMT system to improve the translation by removing ambiguity on some phrases or words. We participate in the 8th Workshop on Asian Translation (WAT-2021) for English-Hindi multimodal translation task and achieve 42.47 and 37.50 BLEU points for Evaluation and Challenge subset, respectively.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Neural Machine Translation (NMT) is a predominant machine translation technology nowadays because of its end-to-end trainable flexibility. However, NMT still struggles to translate properly in low-resource settings specifically on distant language pairs. One way to overcome this is to use the information from other modalities if available. The idea is that despite differences in languages, both the source and target language speakers see the same thing and the visual representation of both the source and target is the same, which can positively assist the system. Multimodal information can help the NMT system to improve the translation by removing ambiguity on some phrases or words. We participate in the 8th Workshop on Asian Translation (WAT-2021) for English-Hindi multimodal translation task and achieve 42.47 and 37.50 BLEU points for Evaluation and Challenge subset, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Recent progress in neural machine translation (NMT) focuses on translating a source language into a particular target language. Various methods have been proposed for this task and most of them deal with the textual data. There are certain drawbacks while performing machine translation using only textual datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Human performs translation which is based upon language grounding: our sense of meaning emerges from interacting with the world. NMT methods do not have any mechanism to perform language grounding; thus they are devoid of capturing the true meaning of sentences or phrases while translating them into the other languages. For example, it * Equal contribution needs to translate the word \"cricket\", it can get confused if it is the game cricket or the insect cricket. But the visual information can clear the ambiguity. Multi-modal translation aims to alleviate this issue by training an NMT model on textual data along with associated images to perform language grounding. This shared task deals with developing multi-modal NMT models for English-Hindi translation. The choice of languages depends on the following issues: i). Hindi is the most spoken language in India and the fourth most spoken language in the world with 600 million speakers 1 . Despite the huge amount of speakers, suitable resources in Hindi is limited due to the various factors. ii) Automatic translation of texts from one language to the another is a difficult task. Specifically, when one or both of them are resource-poor and distant from each other. In Multimodal NMT (MNMT), information from the other modalities like audio, image, video, etc. are used along with text to generate the translation. In low-resource languages, this is particularly used to improve the low-quality translations as even though vocabularies, grammar of two languages are different but their visual representation is the same. There are several proposed multi-modal methods for translations that exploit the features of the associated image for better translation. Stateof-the-art methods might achieve better accuracy than the models we used. Our main motivation for using simplistic models is to demonstrate a proof-of-concept to be used for multi-modal translation among the resource-poor language pairs. 
We achieved good results on both Challenge and Evaluation set in different evaluation metrics including BLEU, RIBES, AMFM. In subsequent modifications, we aim to develop our models incorporating several state-of-the-art features. The following sections describe our processes in greater details.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There have been many attempts to use information other than the source for better translation. Uni-modal systems include document-level NMT (Wang et al., 2017) , sentence-level NMT with contextual information (Gain et al., 2021), etc. Among multimodal systems, (Huang et al., 2016) used an object detection system and extracted local and global image features. Thereafter, they used those image features as additional inputs to encoder and decoder. (Delbrouck and Dupont, 2017) used attention mechanism on visual inputs for the source hidden states. (Lin et al., 2020) used Dynamic Context-guided Capsule Network (Sabour et al., 2017 ) (DCCN) for iterative extraction of related visual features.", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 159, |
| "text": "(Wang et al., 2017)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 261, |
| "end": 281, |
| "text": "(Huang et al., 2016)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 449, |
| "end": 477, |
| "text": "(Delbrouck and Dupont, 2017)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 550, |
| "end": 568, |
| "text": "(Lin et al., 2020)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 613, |
| "end": 633, |
| "text": "(Sabour et al., 2017", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Multimodal Machine Translation (MMT) for English-Hindi has not been well explored yet. (Dutta Chowdhury et al., 2018) used synthetic data for training. Furthermore they used multi-modal, attention-based MMT which incorporate visual features into different parts of both the encoder and the decoder . (Sanayai Meetei et al., 2019) used a Recurrent Neural Network (RNN) based approach achieving BLEU score of 28.45 on Evaluation set and 12.58 on Challenge set. (Laskar et al., 2020) exploited monolingual data for better translation. Recent works tried to focus on developing unsupervised model for multi-modal NMT. Su et al. (2018) demonstrated an unsupervised method based on the language translation cycle consistency loss conditional on the image. This is done to learn the bidirectional multi-modal translation simultaneously. Moreover, Su et al. (2021) showed that jointly learning text-image interaction instead of modeling them separately using attentional networks is more useful. This result is in line with several state-of-the-art visual transformer related models, such as VisualBERT (Li et al., 2019) , UNITER (Chen et al., 2019) etc.", |
| "cite_spans": [ |
| { |
| "start": 614, |
| "end": 630, |
| "text": "Su et al. (2018)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 840, |
| "end": 856, |
| "text": "Su et al. (2021)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1095, |
| "end": 1112, |
| "text": "(Li et al., 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1122, |
| "end": 1141, |
| "text": "(Chen et al., 2019)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We use Hindi Visual Genome 1.1 dataset (Parida et al., 2019) (Nakazawa et al., 2020) (Nakazawa et al., 2021) for our experiments. This dataset consists of 28,929 parallel English-Hindi sentence Table 1 .", |
| "cite_spans": [ |
| { |
| "start": 39, |
| "end": 60, |
| "text": "(Parida et al., 2019)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 61, |
| "end": 84, |
| "text": "(Nakazawa et al., 2020)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 85, |
| "end": 108, |
| "text": "(Nakazawa et al., 2021)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 194, |
| "end": 201, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset Description", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Multimodal dataset consists of an image along with a description of certain rectangular portion of the image. We are given the coordinates of the portion. We aim to translate the description with help of the image. An example of multimodal dataset is given in Figure 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 260, |
| "end": 268, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset Description", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For text data, we lowercase all the utterances. Then, we jointly learn byte-pair-encoding (Sennrich et al., 2016) combining both source and target with a vocabulary of 10,000. We treat the images by cropping a specified rectangular portions. This operation is used to discard the portions that do not contribute much to the translation performance. After we get those cropped-out images, we use the pre-trained VGG19-bn (Simonyan and Zisserman, 2015) to obtain the image representations. We use OpenNMT-py (Klein et al., 2017) framework to perform this step.", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 113, |
| "text": "(Sennrich et al., 2016)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 420, |
| "end": 450, |
| "text": "(Simonyan and Zisserman, 2015)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 506, |
| "end": 526, |
| "text": "(Klein et al., 2017)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pre-processing", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We use OpenNMT-py (Klein et al., 2017) for our NMT systems. We use Bidirectional RNN encoder and doubly attentive RNN decoder for our experiments. We train our system in two ways viz. With pre-training, and Without pre-training.:", |
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 38, |
| "text": "(Klein et al., 2017)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "1. With pre-training We pre-train one of our models on HindEnCorp dataset. This step does not use any visual features as the dataset used for pre-training is devoid of any visual Figure 2 : An example of translation generated by the system. Here, the target is Ek vyakti railgari mein chad raha hai (A man climbing into train.) The translation by Google NMT system is Train mein chadta Aadmi (Man climbs into train); whereas our NMT system translates it as: Ek aadmi ek train mein chadta hai (A man climbs into a train.)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 179, |
| "end": 187, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "features. After pre-training, we fine-tune the pre-trained model with VisualGenome dataset containing textual and visual features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "2. Without pre-training We do not pre-train the model. We directly fine-tune the models on VisualGenome dataset which contains both text and associated image. Consequently, both textual and visual features are used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Following step is taken into account while doing inference step:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We take the best hypothesis from both the models and filter out any hypothesis containing <unk>token. Then, we pick the hypothesis with best log-likelihood during generation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We set the word embedding size and size of RNN hidden states to 500. We set the batch size to 40 and train for a maximum 25 epochs. We restrict maximum source and target sequence length to 50. We use the Adam optimizer (Kingma and Ba, 2017) for optimization with \u03b2 1 = 0.9 and \u03b2 2 = 0.999. During training, we use 0.3 as dropout rate to avoid over-fitting. During generation of translation, we use 5 as the beam width.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyper-parameters", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We obtain impressive results on our submissions. There are two sets designed for evaluating our model, i) Evaluation set, ii) Challenge set. We evaluate our model on both of these test set and tabulate our results in Table 2 . We use different evaluation metrics (BLEU, RIBES, AMFM) to test our model. The results shown in the table are sorted according to the obtained BLEU scores. As it can be seen from Table 2, we obtain 42.47 BLEU points and achieve second position in terms of BLEU on Evaluation set on multimodal task. Please refer to Figure 2 for example of translation by our system. We obtain 37.50 BLEU points on Challenge set. One reason for not so good results on Challenge set could be:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 217, |
| "end": 224, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 542, |
| "end": 550, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 The challenge test set was created by searching for (particularly) ambiguous English words based on the embedding similarity and manually selecting those where the image helps to resolve the ambiguity. Hence, it is difficult to translate compared to the Evaluation set, which was randomly selected. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We participate in WAT-2021 Multimodal Translation Task for English to Hindi. We achieve good results on both the Challenge and Evaluation sets achieving 42.47 and 37.50 BLEU points, respectively. We rank second place on Evaluation set and third place on Challenge set on WAT-2021 Multimodal Translation Task for English to Hindi. In future, we would like to extend our work by training with additional monolingual data and better ways to incorporate multimodal features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "https://www.ethnologue.com/guides/ethnologue200", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Incorporating global visual features into attention-based neural machine translation", |
| "authors": [ |
| { |
| "first": "Iacer", |
| "middle": [], |
| "last": "Calixto", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "992--1003", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D17-1105" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iacer Calixto and Qun Liu. 2017. Incorporating global visual features into attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 992-1003, Copenhagen, Denmark. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Doubly-attentive decoder for multi-modal neural machine translation", |
| "authors": [ |
| { |
| "first": "Iacer", |
| "middle": [], |
| "last": "Calixto", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Nick", |
| "middle": [], |
| "last": "Campbell", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1913--1924", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P17-1175" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iacer Calixto, Qun Liu, and Nick Campbell. 2017. Doubly-attentive decoder for multi-modal neural machine translation. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1913- 1924, Vancouver, Canada. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "UNITER: learning universal image-text representations", |
| "authors": [ |
| { |
| "first": "Yen-Chun", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Linjie", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Licheng", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ahmed", |
| "middle": [ |
| "El" |
| ], |
| "last": "Kholy", |
| "suffix": "" |
| }, |
| { |
| "first": "Faisal", |
| "middle": [], |
| "last": "Ahmed", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhe", |
| "middle": [], |
| "last": "Gan", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Jingjing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. UNITER: learning universal image-text representations. CoRR, abs/1909.11740.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Modulating and attending the source image during encoding improves multimodal translation", |
| "authors": [ |
| { |
| "first": "Jean-Benoit", |
| "middle": [], |
| "last": "Delbrouck", |
| "suffix": "" |
| }, |
| { |
| "first": "St\u00e9phane", |
| "middle": [], |
| "last": "Dupont", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean-Benoit Delbrouck and St\u00e9phane Dupont. 2017. Modulating and attending the source image during encoding improves multimodal translation.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Multimodal neural machine translation for low-resource language pairs using synthetic data", |
| "authors": [ |
| { |
| "first": "Mohammed", |
| "middle": [], |
| "last": "Koel Dutta Chowdhury", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Hasanuzzaman", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "33--42", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W18-3405" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koel Dutta Chowdhury, Mohammed Hasanuzzaman, and Qun Liu. 2018. Multimodal neural machine translation for low-resource language pairs using synthetic data. In Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP, pages 33-42, Melbourne. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Not all contexts are important: The impact of effective context in conversational neural machine translation", |
| "authors": [ |
| { |
| "first": "Rejwanul", |
| "middle": [], |
| "last": "Baban Gain", |
| "suffix": "" |
| }, |
| { |
| "first": "Asif", |
| "middle": [], |
| "last": "Haque", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ekbal", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "2021 International Joint Conference on Neural Networks (IJCNN)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baban Gain, Rejwanul Haque, and Asif Ekbal. 2021. Not all contexts are important: The impact of effec- tive context in conversational neural machine trans- lation. In 2021 International Joint Conference on Neural Networks (IJCNN).", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Attention-based multimodal neural machine translation", |
| "authors": [ |
| { |
| "first": "Po-Yao", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Frederick", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Sz-Rung", |
| "middle": [], |
| "last": "Shiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Oh", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the First Conference on Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "639--645", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W16-2360" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. 2016. Attention-based multi- modal neural machine translation. In Proceedings of the First Conference on Machine Translation: Vol- ume 2, Shared Task Papers, pages 639-645, Berlin, Germany. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Diederik", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P. Kingma and Jimmy Ba. 2017. Adam: A method for stochastic optimization.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "OpenNMT: Opensource toolkit for neural machine translation", |
| "authors": [ |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuntian", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Senellart", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Rush", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of ACL 2017, System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "67--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Multimodal neural machine translation for English to Hindi", |
| "authors": [ |
| { |
| "first": "Abdullah", |
| "middle": [], |
| "last": "Sahinur Rahman Laskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Partha", |
| "middle": [], |
| "last": "Faiz Ur Rahman Khilji", |
| "suffix": "" |
| }, |
| { |
| "first": "Sivaji", |
| "middle": [], |
| "last": "Pakray", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Bandyopadhyay", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 7th Workshop on Asian Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "109--113", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sahinur Rahman Laskar, Abdullah Faiz Ur Rahman Khilji, Partha Pakray, and Sivaji Bandyopadhyay. 2020. Multimodal neural machine translation for English to Hindi. In Proceedings of the 7th Work- shop on Asian Translation, pages 109-113, Suzhou, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Visualbert: A simple and performant baseline for vision and language", |
| "authors": [ |
| { |
| "first": "Liunian Harold", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Yatskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Da", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "Cho-Jui", |
| "middle": [], |
| "last": "Hsieh", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and lan- guage. CoRR, abs/1908.03557.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Dynamic context-guided capsule network for multimodal machine translation", |
| "authors": [ |
| { |
| "first": "Huan", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Fandong", |
| "middle": [], |
| "last": "Meng", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinsong", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Yongjing", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengyuan", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yubin", |
| "middle": [], |
| "last": "Ge", |
| "suffix": "" |
| }, |
| { |
| "first": "Jie", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiebo", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 28th ACM International Conference on Multimedia", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3394171.3413715" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Huan Lin, Fandong Meng, Jinsong Su, Yongjing Yin, Zhengyuan Yang, Yubin Ge, Jie Zhou, and Jiebo Luo. 2020. Dynamic context-guided capsule net- work for multimodal machine translation. Proceed- ings of the 28th ACM International Conference on Multimedia.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Shantipriya Parida", |
| "authors": [ |
| { |
| "first": "Toshiaki", |
| "middle": [], |
| "last": "Nakazawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Nakayama", |
| "suffix": "" |
| }, |
| { |
| "first": "Chenchen", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "Raj", |
| "middle": [], |
| "last": "Dabre", |
| "suffix": "" |
| }, |
| { |
| "first": "Shohei", |
| "middle": [], |
| "last": "Higashiyama", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideya", |
| "middle": [], |
| "last": "Mino", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Shohei Higashiyama, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukut- tan, Shantipriya Parida, Ond\u0159ej Bojar, Chenhui", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Overview of the 8th workshop on Asian translation", |
| "authors": [ |
| { |
| "first": "Akiko", |
| "middle": [], |
| "last": "Chu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kaori", |
| "middle": [], |
| "last": "Eriguchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Sadao", |
| "middle": [], |
| "last": "Abe", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Oda", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kurohashi", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 8th Workshop on Asian Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chu, Akiko Eriguchi, Kaori Abe, and Sadao Oda, Yusuke Kurohashi. 2021. Overview of the 8th work- shop on Asian translation. In Proceedings of the 8th Workshop on Asian Translation, Bangkok, Thailand. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Overview of the 7th workshop on Asian translation", |
| "authors": [ |
| { |
| "first": "Toshiaki", |
| "middle": [], |
| "last": "Nakazawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Nakayama", |
| "suffix": "" |
| }, |
| { |
| "first": "Chenchen", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "Raj", |
| "middle": [], |
| "last": "Dabre", |
| "suffix": "" |
| }, |
| { |
| "first": "Shohei", |
| "middle": [], |
| "last": "Higashiyama", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideya", |
| "middle": [], |
| "last": "Mino", |
| "suffix": "" |
| }, |
| { |
| "first": "Isao", |
| "middle": [], |
| "last": "Goto", |
| "suffix": "" |
| }, |
| { |
| "first": "Win", |
| "middle": [ |
| "Pa" |
| ], |
| "last": "Pa", |
| "suffix": "" |
| }, |
| { |
| "first": "Anoop", |
| "middle": [], |
| "last": "Kunchukuttan", |
| "suffix": "" |
| }, |
| { |
| "first": "Shantipriya", |
| "middle": [], |
| "last": "Parida", |
| "suffix": "" |
| }, |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Sadao", |
| "middle": [], |
| "last": "Kurohashi", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 7th Workshop on Asian Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "1--44", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Shohei Higashiyama, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukut- tan, Shantipriya Parida, Ond\u0159ej Bojar, and Sadao Kurohashi. 2020. Overview of the 7th workshop on Asian translation. In Proceedings of the 7th Work- shop on Asian Translation, pages 1-44, Suzhou, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Hindi Visual Genome: A Dataset for Multimodal English-to-Hindi Machine Translation", |
| "authors": [ |
| { |
| "first": "Shantipriya", |
| "middle": [], |
| "last": "Parida", |
| "suffix": "" |
| }, |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Satya Ranjan", |
| "middle": [], |
| "last": "Dash", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Presented at CICLing", |
| "volume": "23", |
| "issue": "", |
| "pages": "1499--1505", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shantipriya Parida, Ond\u0159ej Bojar, and Satya Ranjan Dash. 2019. Hindi Visual Genome: A Dataset for Multimodal English-to-Hindi Machine Translation. Computaci\u00f3n y Sistemas, 23(4):1499-1505. Pre- sented at CICLing 2019, La Rochelle, France.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Dynamic routing between capsules", |
| "authors": [ |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Sabour", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicholas", |
| "middle": [], |
| "last": "Frosst", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "WAT2019: English-Hindi translation on Hindi visual genome dataset", |
| "authors": [ |
| { |
| "first": "Loitongbam Sanayai", |
| "middle": [], |
| "last": "Meetei", |
| "suffix": "" |
| }, |
| { |
| "first": "Thoudam Doren", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Sivaji", |
| "middle": [], |
| "last": "Bandyopadhyay", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 6th Workshop on Asian Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "181--188", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D19-5224" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Loitongbam Sanayai Meetei, Thoudam Doren Singh, and Sivaji Bandyopadhyay. 2019. WAT2019: English-Hindi translation on Hindi visual genome dataset. In Proceedings of the 6th Workshop on Asian Translation, pages 181-188, Hong Kong, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Neural machine translation of rare words with subword units", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1715--1725", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P16-1162" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Very deep convolutional networks for large-scale image recognition", |
| "authors": [ |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Simonyan", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Zisserman", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Multi-modal neural machine translation with deep semantic interactions", |
| "authors": [ |
| { |
| "first": "Jinsong", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinchang", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Chulun", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Huan", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Yubin", |
| "middle": [], |
| "last": "Ge", |
| "suffix": "" |
| }, |
| { |
| "first": "Qingqiang", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yongxuan", |
| "middle": [], |
| "last": "Lai", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Information Sciences", |
| "volume": "554", |
| "issue": "", |
| "pages": "47--60", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.ins.2020.11.024" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jinsong Su, Jinchang Chen, Hui Jiang, Chulun Zhou, Huan Lin, Yubin Ge, Qingqiang Wu, and Yongx- uan Lai. 2021. Multi-modal neural machine trans- lation with deep semantic interactions. Information Sciences, 554:47-60.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Unsupervised multi-modal neural machine translation", |
| "authors": [ |
| { |
| "first": "Yuanhang", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Fan", |
| "suffix": "" |
| }, |
| { |
| "first": "Nguyen", |
| "middle": [], |
| "last": "Bach", |
| "suffix": "" |
| }, |
| { |
| "first": "C.-C. Jay", |
| "middle": [], |
| "last": "Kuo", |
| "suffix": "" |
| }, |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuanhang Su, Kai Fan, Nguyen Bach, C.-C. Jay Kuo, and Fei Huang. 2018. Unsupervised multi-modal neural machine translation. CoRR, abs/1811.11365.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Exploiting cross-sentence context for neural machine translation", |
| "authors": [ |
| { |
| "first": "Longyue", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhaopeng", |
| "middle": [], |
| "last": "Tu", |
| "suffix": "" |
| }, |
| { |
| "first": "Andy", |
| "middle": [], |
| "last": "Way", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2826--2831", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D17-1301" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2826-2831, Copenhagen, Denmark. Association for Computational Linguis- tics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "text": "An example of multimodal dataset pairs along with the associated images. Furthermore, we use HindEnCorp dataset for pre-training containing 273K English-Hindi sentence pairs without images. Statistics of the datasets are shown in", |
| "uris": null |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>Team</td><td>BLEU</td><td>Evaluation RIBES</td><td>AMFM BLEU</td><td>Challenge RIBES</td><td>AMFM</td></tr><tr><td>Volta</td><td colspan=\"5\">44.21 0.818689 0.835480 52.02 0.854139 0.874220</td></tr><tr><td>iitp (Ours)</td><td colspan=\"2\">42.47 0</td><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"3\">\u2022 Difference between utterance length during</td></tr><tr><td/><td/><td/><td colspan=\"3\">training and testing, i.e. while average length</td></tr></table>", |
| "text": ".807123 0.819720 37.50 0.790809 0.830230 CNLP-NITS 40.51 0.803208 0.820980 39.28 0.792097 0.812360 CNLP-NITS 39.46 0.802055 0.823270 33.57 0.754141 0.787320 Organizer 38.63 0.767422 0.772870 20.34 0.644230 0.669760" |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "Details of obtained results by different submissions of Train, Evaluation and Validation set is 5 but average length of Challenge set is 6." |
| } |
| } |
| } |
| } |