The fourth dataset, considered 'silver', was generated through back-translation of a subset of the Arabic sentences in [OPUS](https://huggingface.co/datasets/Helsinki-NLP/opus-100).
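
As a rough, hypothetical sketch of how such a silver set can be assembled (the reverse MSA-to-dialect model used for this dataset is not specified in this card, so `msa-to-dialect-model` below is a placeholder, not a real checkpoint):

```python
from datasets import load_dataset
from transformers import pipeline

# Arabic side of the Arabic-English portion of OPUS-100.
opus = load_dataset("Helsinki-NLP/opus-100", "ar-en", split="train")
msa_sentences = [ex["translation"]["ar"] for ex in opus.select(range(10_000))]

# Placeholder MSA -> dialect model used only to create synthetic dialectal sources;
# the actual back-translation model behind this dataset is not stated here.
backtranslator = pipeline("text2text-generation", model="msa-to-dialect-model")

silver_pairs = []
for msa in msa_sentences:
    dialect = backtranslator(msa, max_length=256)[0]["generated_text"]
    # Synthetic (dialect -> MSA) pair: the original OPUS sentence serves as the target.
    silver_pairs.append({"source": dialect, "target": msa})
```
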
### Evaluation results

BLEU score on the development split of Task 2 (Dialect to MSA Machine Translation) of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools:

| Model         | BLEU   |
|---------------|--------|
| AraT5-MSAizer | 0.2302 |

Official evaluation results on the held-out test split:

| Model         | BLEU   | COMET DA |
|---------------|--------|----------|
| AraT5-MSAizer | 0.2179 | 0.0016   |
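
The dev-set BLEU above could be reproduced with a minimal sacreBLEU sketch along these lines, assuming plain-text files of model outputs and reference MSA sentences (the file names and the 0-1 score scale are assumptions, not part of the official scoring setup):

```python
import sacrebleu

# Hypothetical file names; the shared-task data layout is not described in this card.
with open("dev.hyp.msa", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("dev.ref.msa", encoding="utf-8") as f:
    references = [line.strip() for line in f]

# corpus_bleu takes a list of hypotheses and a list of reference lists.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(bleu.score / 100)  # the tables above appear to report BLEU on a 0-1 scale
```
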
## Training procedure
The model was trained by fully fine-tuning [UBC-NLP/AraT5v2-base-1024](https://huggingface.co/UBC-NLP/AraT5v2-base-1024) for one epoch only.
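
A condensed sketch of what a single-epoch full fine-tune of the base checkpoint might look like with Hugging Face Transformers; apart from `num_train_epochs=1` and the base model, the column names, toy data, and hyperparameters are illustrative assumptions, not the reported training recipe:

```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base = "UBC-NLP/AraT5v2-base-1024"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# Toy stand-in for the combined gold + silver pairs; real data and column names may differ.
train_dataset = Dataset.from_dict({
    "source": ["شو عم تعمل هلق؟"],   # dialectal input (placeholder)
    "target": ["ماذا تفعل الآن؟"],   # MSA target (placeholder)
})

def preprocess(batch):
    # Tokenize dialectal sources and MSA targets for sequence-to-sequence training.
    model_inputs = tokenizer(batch["source"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["target"], max_length=1024, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = train_dataset.map(preprocess, batched=True, remove_columns=train_dataset.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="arat5-msaizer-ft",
    num_train_epochs=1,              # the card reports a single epoch of full fine-tuning
    per_device_train_batch_size=8,   # assumed; not reported in the card
    learning_rate=3e-4,              # assumed; not reported in the card
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```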