---
license: apache-2.0
datasets:
- Helsinki-NLP/tatoeba
- openlanguagedata/flores_plus
- facebook/bouquet
language:
- en
- fr
metrics:
- bleu
- comet
- chrf
pipeline_tag: translation
---
# OPUS-MT-tiny-fra-eng
Distilled model from the Tatoeba-MT teacher OPUS-MT-models/fr-en/opus-2020-02-26, which was trained on the Tatoeba dataset.

We used OpusDistillery to train a new student with the tiny architecture and a regular transformer decoder. For training data, we used Tatoeba. The configuration file fed into OpusDistillery can be found here.
## How to run
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt_tiny_fra-eng"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize the French source sentence and generate the English translation
inputs = tokenizer("Les efforts visant à trouver le lieu de l’accident sont restreints par des intempéries et le terrain accidenté.", return_tensors="pt").input_ids
output = model.generate(inputs)[0]
print(tokenizer.decode(output, skip_special_tokens=True))
```
## Benchmarks

### Teacher
| testset | BLEU | chr-F | COMET |
|---|---|---|---|
| Flores+ | 41.8 | 66.9 | 0.8689 |
| Bouquet | 43.5 | 64.5 | 0.875 |
### Student
| testset | BLEU | chr-F | COMET |
|---|---|---|---|
| Flores+ | 40.4 | 65.9 | 0.8734 |
| Bouquet | 39.9 | 61.9 | 0.8551 |
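For intuition on the BLEU scores above: BLEU is built on modified (clipped) n-gram precision between a hypothesis translation and its reference. The sketch below shows just that core quantity for a single sentence pair; it is an illustrative toy (the toy sentences are not from the benchmark sets), not the full corpus-level BLEU with brevity penalty and smoothing, which the benchmarks use.

```python
from collections import Counter

def ngram_precision(hyp_tokens, ref_tokens, n):
    """Modified n-gram precision: hypothesis n-gram counts, clipped by the
    reference counts, divided by the total number of hypothesis n-grams."""
    hyp_ngrams = Counter(tuple(hyp_tokens[i:i + n]) for i in range(len(hyp_tokens) - n + 1))
    ref_ngrams = Counter(tuple(ref_tokens[i:i + n]) for i in range(len(ref_tokens) - n + 1))
    clipped = sum(min(count, ref_ngrams[g]) for g, count in hyp_ngrams.items())
    total = sum(hyp_ngrams.values())
    return clipped / total if total else 0.0

# Illustrative example: 5 of the 6 hypothesis unigrams also occur in the reference
hyp = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(ngram_precision(hyp, ref, 1))  # → 0.8333...
```

Real BLEU combines precisions for n = 1..4 as a geometric mean and multiplies by a brevity penalty; tools such as sacrebleu implement the standardized corpus-level computation.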