Update annotations for Ekaterina/paper_58.txt
annotations/Ekaterina/paper_58.txt.json
CHANGED
@@ -22,5 +22,13 @@
     "label": "Format",
     "user": "Ekaterina",
     "text": "Burns et al., 2020)"
-  }
+  },
+  {
+    "file": "paper_58.txt",
+    "start": 2828,
+    "end": 3670,
+    "label": "Lacks synthesis",
+    "user": "Ekaterina",
+    "text": "There has been growing interest in combining vision and language for tasks such as visual-guided machine translation (Sigurdsson et al., 2020;Surís et al., 2020;Huang et al., 2020), multi-lingual visual question answering (Gao et al., 2015;Gupta et al., 2020;Shimizu et al., 2018), multi-lingual image captioning (Gu et al., 2018;Lan et al., 2017), multilingual video captioning (Wang et al., 2019b), and multi-lingual image-sentence retrieval Burns et al., 2020). In this paper, we work on multi-lingual vision-and-language navigation. We use vision (i.e., navigation path) as a bridge between multi-lingual instructions and learn a crosslingual representation that captures visual concepts. Furthermore, our approach also use language as a bridge between different visual environments to learn an environment-agnostic visual representation."
+  }
 ]