---
language:
- en
- ru
tags:
- translation
license: cc-by-4.0
datasets:
- quickmt/quickmt-train.ru-en-v2
model-index:
- name: quickmt-ru-en
  results:
  - task:
      name: Translation rus-eng
      type: translation
      args: rus-eng
    dataset:
      name: flores101-devtest
      type: flores_101
      args: rus_Cyrl eng_Latn devtest
    metrics:
    - name: BLEU
      type: bleu
      value: 34.69
    - name: CHRF
      type: chrf
      value: 62.31
    - name: COMET
      type: comet
      value: 85.96
---


# `quickmt-ru-en` Neural Machine Translation Model - V2

`quickmt-ru-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from `ru` into `en`.

This is an updated, higher-quality model, trained on a larger, cleaner dataset for more steps.


## Try it on our Huggingface Space

Try it out before downloading: https://huggingface.co/spaces/quickmt/QuickMT-Demo


## Model Information

* Trained using [`eole`](https://github.com/eole-nlp/eole) 
* 200M-parameter 'big' transformer with 8 encoder layers and 2 decoder layers
* Separate 32k SentencePiece vocabularies for the source and target languages
* Exported for fast inference to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format

See the `eole` model configuration in this repository for further details, and `eole-model` for the raw `eole` (PyTorch) model.
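
As a quick sanity check of the 32k vocabularies, the SentencePiece models shipped with the download can be loaded and inspected. This is only a sketch: it assumes the model has already been downloaded to `./quickmt-ru-en` (see the usage section below) and that the tokenizer files are named `src.spm.model` and `tgt.spm.model`, which may not match the actual file names in this repository.

```python
import sentencepiece as spm

# Hypothetical file names -- check the downloaded model directory for the actual ones
sp_src = spm.SentencePieceProcessor(model_file="./quickmt-ru-en/src.spm.model")
sp_tgt = spm.SentencePieceProcessor(model_file="./quickmt-ru-en/tgt.spm.model")

# Each vocabulary should contain roughly 32k pieces
print(sp_src.get_piece_size(), sp_tgt.get_piece_size())
```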


## Usage with `quickmt`

If you want to do GPU inference, install the NVIDIA CUDA toolkit first.

Next, install the `quickmt` python library and download the model:

```bash
git clone https://github.com/quickmt/quickmt.git
pip install ./quickmt/

quickmt-model-download quickmt/quickmt-ru-en ./quickmt-ru-en
```

Finally, use the model in Python:

```python
from quickmt import Translator

# Auto-detects GPU, set to "cpu" to force CPU inference
t = Translator("./quickmt-ru-en/", device="auto")

# Translate - set beam size to 1 for faster speed (but lower quality)
sample_text = 'Доктор Эхуд Ур, профессор медицины в Университете Далхаузи в Галифаксе, Новая Шотландия, и председатель клинического и научного отдела Канадской диабетической ассоциации, предупредил, что исследование всё ещё находится на ранней стадии.'

t(sample_text, beam_size=5)
```

> 'According to Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical science department of the Canadian Diabetes Association, the research is still in its infancy.'

```python
# Get alternative translations by sampling
# You can pass any CTranslate2 `translate_batch` arguments
t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
```

> 'According to Dr. Ehud Ur, a professor of medicine at Dalhousie University in Halifax, Nova Scotia, and Chair of the Clinical Research Division of the Canadian Diabetes Association, research is still in the initial stages.'

The model is in `ctranslate2` format and the tokenizers are `sentencepiece`, so you can use `ctranslate2` directly rather than through `quickmt`. It should also be possible to use this model with, for example, [LibreTranslate](https://libretranslate.com/), which also uses `ctranslate2` and `sentencepiece`. A model in `safetensors` format for use with `eole` is also provided.
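
A minimal sketch of driving the model with `ctranslate2` and `sentencepiece` directly might look like the following. The tokenizer file names (`src.spm.model`, `tgt.spm.model`) and the model directory layout are assumptions; check the downloaded repository for the actual paths.

```python
import ctranslate2
import sentencepiece as spm

model_dir = "./quickmt-ru-en"

# Load the CTranslate2 model and the source/target SentencePiece tokenizers.
# File names below are assumptions -- adjust to the actual files in the repository.
translator = ctranslate2.Translator(model_dir, device="cpu")
sp_src = spm.SentencePieceProcessor(model_file=f"{model_dir}/src.spm.model")
sp_tgt = spm.SentencePieceProcessor(model_file=f"{model_dir}/tgt.spm.model")

# Tokenize, translate, then detokenize the best hypothesis
src_tokens = sp_src.encode("Исследование всё ещё находится на ранней стадии.", out_type=str)
results = translator.translate_batch([src_tokens], beam_size=5)
print(sp_tgt.decode(results[0].hypotheses[0]))
```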


## Metrics

`bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("rus_Cyrl"->"eng_Latn"); `comet22` is calculated with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate the flores-devtest dataset (1,012 sentences) on an RTX 4070s GPU with batch size 32.

|                                  |   bleu |   chrf2 |   comet22 |   Time (s) |
|:---------------------------------|-------:|--------:|----------:|-----------:|
| quickmt/quickmt-ru-en            |  34.69 |   62.31 |     85.96 |       1.27 |
| Helsinki-NLP/opus-mt-ru-en       |  30.04 |   58.23 |     83.97 |       3.81 |
| facebook/nllb-200-distilled-600M |  34.59 |   61.26 |     85.88 |      22.07 |
| facebook/nllb-200-distilled-1.3B |  36.99 |   63.04 |     86.59 |      38.26 |
| facebook/m2m100_418M             |  26.62 |   56.31 |     81.77 |      18.7  |
| facebook/m2m100_1.2B             |  32.01 |   60.3  |     85.01 |      36.32 |
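
For reference, a sketch of how such scores are typically computed with `sacrebleu` and `comet` (not necessarily the exact evaluation script used here), where `srcs`, `hyps`, and `refs` are the Flores source sentences, model outputs, and reference translations:

```python
import sacrebleu
from comet import download_model, load_from_checkpoint

# Placeholder data -- in practice these are the 1,012 Flores devtest sentences
srcs = ["Исследование всё ещё находится на ранней стадии."]
hyps = ["The research is still in its early stages."]
refs = ["The research is still in its early days."]

# BLEU and chrF2 (sacrebleu's chrF defaults to beta=2)
print(sacrebleu.corpus_bleu(hyps, [refs]).score)
print(sacrebleu.corpus_chrf(hyps, [refs]).score)

# COMET-22 with the default wmt22-comet-da model
comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(srcs, hyps, refs)]
print(comet_model.predict(data, batch_size=8, gpus=0).system_score)
```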