Update annotations for Iraa/paper_11.txt
annotations/Iraa/paper_11.txt.json CHANGED
@@ -14,5 +14,13 @@
     "label": "Unsupported claim",
     "user": "Iraa",
     "text": "multilingual BERT"
+  },
+  {
+    "file": "paper_11.txt",
+    "start": 1961,
+    "end": 2746,
+    "label": "Coherence",
+    "user": "Iraa",
+    "text": "Can a high-performance CLIR model be trained that can operate without having to rely on MT? To answer the question, instead of viewing the MT-based approach as a competing one, we propose to leverage its strength via knowledge distillation (KD) into an end-to-end CLIR model. KD (Hinton et al., 2014) is a powerful supervision technique typically used to distill the knowledge of a large teacher model about some task into a smaller student model (Mukherjee and Awadallah, 2020; Turc et al., 2020). Here we propose to use it in a slightly different context, where the teacher and the student retriever are identical in size, but the former has superior performance simply due to utilizing MT output and consequently operating in a high-resource and low-difficulty monolingual environment."
+  }
 ]
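For context, each entry in this file marks a character span in the underlying paper text: "start" and "end" are offsets into the file named by "file", and "text" stores the quoted span. A minimal sketch of reading the annotations back and verifying the offsets; the annotations path comes from this commit, while the papers/ directory and the load_spans helper are assumptions for illustration, not part of the repo shown here.

import json
from pathlib import Path

# Path visible in this commit; the location of the raw paper .txt files
# is not shown in the diff, so "papers/" is a hypothetical layout.
ANNOTATIONS = Path("annotations/Iraa/paper_11.txt.json")
PAPERS_DIR = Path("papers")

def load_spans(annotations_path: Path, papers_dir: Path):
    """Yield (label, user, span) per entry, checking that the stored
    offsets actually reproduce the stored "text" field."""
    entries = json.loads(annotations_path.read_text(encoding="utf-8"))
    for entry in entries:
        paper_text = (papers_dir / entry["file"]).read_text(encoding="utf-8")
        span = paper_text[entry["start"]:entry["end"]]
        # A mismatch usually means the paper file changed after annotation.
        if span != entry["text"]:
            print(f"offset drift in {entry['file']}: {entry['label']}")
        yield entry["label"], entry["user"], span

if __name__ == "__main__":
    for label, user, text in load_spans(ANNOTATIONS, PAPERS_DIR):
        print(label, "|", user, "|", text[:60])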
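The newly annotated passage describes a knowledge-distillation setup for CLIR: a teacher retriever scores MT-translated (monolingual) queries against documents, and an identically sized student is trained to match those scores on the original cross-lingual input. The following is a minimal, hypothetical PyTorch sketch of that kind of objective, a KL divergence between the two models' relevance distributions in the standard Hinton et al. formulation; the function name, tensor shapes, and temperature are illustrative assumptions, not the cited paper's actual implementation.

import torch
import torch.nn.functional as F

def kd_loss(student_scores: torch.Tensor,
            teacher_scores: torch.Tensor,
            temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between teacher and student relevance distributions.

    student_scores: [batch, n_docs] similarities for the original
        (cross-lingual) query against candidate documents.
    teacher_scores: [batch, n_docs] similarities for the MT-translated
        (monolingual) query against the same documents.
    """
    student_log_probs = F.log_softmax(student_scores / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_scores / temperature, dim=-1)
    # batchmean reduction plus T^2 scaling follows the usual KD recipe.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

Because teacher and student share the same architecture here, the gain comes purely from the teacher seeing easier monolingual input, which is the point the annotated passage makes.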