ImanAndrea committed on
Commit
128e37b
·
verified ·
1 Parent(s): 1a7c2cf

Update annotations for Ekaterina/paper_15.txt

annotations/Ekaterina/paper_15.txt.json CHANGED
@@ -22,5 +22,13 @@
22       "label": "Coherence",
23       "user": "Ekaterina",
24       "text": "Encouraging results appeared in 2021) proposed a cross-lingual visual pretraining approach."
25 +   },
26 +   {
27 +     "file": "paper_15.txt",
28 +     "start": 14,
29 +     "end": 1184,
30 +     "label": "Lacks synthesis",
31 +     "user": "Ekaterina",
32 +     "text": "Multimodal machine translation is a cross-domain task in the filed of machine translation. Early attempts mainly focused on enhancing the MMT model by better incorporation of the vision features (Calixto and Liu, 2017;Elliott and Kádár, 2017;Delbrouck and Dupont, 2017). However, directly encoding the whole image feature brings additional noise to the text (Yao and Wan, 2020;Liu et al., 2021a). To address the above issue, Yao and Wan (2020) proposed a multimodal self-attention to consider the relative difference of information between two modalities. Similarly, Liu et al. (2021a) used a Gumbel Softmax to achieve the same goal.\n\nResearchers also realize that the vision modality maybe redundant. Irrelevant images have little impact on the translation quality, and no significant BLEU drop is observed even the image is absent (Elliott, 2018). Encouraging results appeared in 2021) proposed a cross-lingual visual pretraining approach. In this work, we make a systematic study on whether stronger vision features are helpful. We also extend the research to enhanced features, such as object-detection and image captioning, which is complementary to previous work."
33     }
34   ]
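
The added record follows the same schema as the existing annotations (`file`, `start`, `end`, `label`, `user`, `text`). A minimal sketch of reading one such record, assuming `start`/`end` are character offsets into the annotated source file (the commit itself does not document the field semantics):

```python
import json

# One annotation record, shaped like the entry added in this commit.
# The "text" value is shortened here for illustration.
record = json.loads("""
{
  "file": "paper_15.txt",
  "start": 14,
  "end": 1184,
  "label": "Lacks synthesis",
  "user": "Ekaterina",
  "text": "Multimodal machine translation is a cross-domain task..."
}
""")

def annotated_span(source_text: str, rec: dict) -> str:
    """Return the character span [start, end) the annotation points at,
    assuming start/end index into the raw text of rec["file"]."""
    return source_text[rec["start"]:rec["end"]]

# Example: against a 2000-character dummy source, the span covers
# end - start = 1170 characters.
span = annotated_span("x" * 2000, record)
```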