---
language:
- zh
- en
tags:
- translation
license: cc-by-4.0
datasets:
- quickmt/quickmt-train.zh-en
model-index:
- name: quickmt-zh-en
  results:
  - task:
      name: Translation zho-eng
      type: translation
      args: zho-eng
    dataset:
      name: flores101-devtest
      type: flores_101
      args: zho_Hans eng_Latn devtest
    metrics:
    - name: BLEU
      type: bleu
      value: 29.36
    - name: CHRF
      type: chrf
      value: 58.1
---
# quickmt-zh-en Neural Machine Translation Model

`quickmt-zh-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from Chinese (`zh`) into English (`en`).
## Model Information

- Trained using `eole`
- 200M-parameter transformer "big" with 8 encoder layers and 2 decoder layers
- Separate source and target SentencePiece tokenizers
- Exported to CTranslate2 format for fast inference
- Training data: https://huggingface.co/datasets/quickmt/quickmt-train.zh-en/tree/main

See the `eole` model configuration in this repository for further details.
## Usage with quickmt

First, install quickmt and download the model:

```bash
git clone https://github.com/quickmt/quickmt.git
pip install ./quickmt/
quickmt-model-download quickmt/quickmt-zh-en ./quickmt-zh-en
```
Then translate from Python:

```python
from quickmt import Translator

# Auto-detects GPU; set device="cpu" to force CPU inference
t = Translator("./quickmt-zh-en/", device="auto")

# Translate - set beam_size=5 for higher quality (but slower speed)
t(["他补充道:“我们现在有 4 个月大没有糖尿病的老鼠,但它们曾经得过该病。”"], beam_size=1)

# Get alternative translations by sampling
# You can pass any CTranslate2 `translate_batch` arguments
t(["他补充道:“我们现在有 4 个月大没有糖尿病的老鼠,但它们曾经得过该病。”"], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
```
The model is in CTranslate2 format and the tokenizers are SentencePiece models, so you can use CTranslate2 directly instead of going through quickmt. It is also possible to use this model with, e.g., LibreTranslate, which also uses CTranslate2 and SentencePiece.
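For example, here is a minimal sketch of direct CTranslate2 inference. The tokenizer filenames (`src.spm.model`, `tgt.spm.model`) are assumptions for illustration; check the downloaded model directory for the actual file names.

```python
import ctranslate2
import sentencepiece as spm

# Load the source/target SentencePiece models. The filenames here are
# assumptions -- check the downloaded model directory for the actual names.
src_sp = spm.SentencePieceProcessor(model_file="./quickmt-zh-en/src.spm.model")
tgt_sp = spm.SentencePieceProcessor(model_file="./quickmt-zh-en/tgt.spm.model")

# Load the CTranslate2 model ("auto" picks a GPU if available, else CPU)
translator = ctranslate2.Translator("./quickmt-zh-en/", device="auto")

# Tokenize, translate, detokenize
source_tokens = src_sp.encode("他补充道:“我们现在有 4 个月大没有糖尿病的老鼠,但它们曾经得过该病。”", out_type=str)
results = translator.translate_batch([source_tokens], beam_size=5)
print(tgt_sp.decode(results[0].hypotheses[0]))
```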
## Metrics

BLEU and chrF2 are calculated with sacrebleu on the Flores200 devtest set ("zho_Hans" → "eng_Latn"). COMET22 is computed with the comet library and its default model. "Time (s)" is the time in seconds to translate the flores-devtest dataset (1012 sentences) with CTranslate2 on an RTX 4070S GPU, using a batch size of 32 (except for madlad400-3b-mt, which used a batch size of 1).
| Model | BLEU | chrF2 | COMET22 | Time (s) |
|---|---|---|---|---|
| quickmt/quickmt-zh-en | 29.36 | 58.10 | 0.8655 | 0.88 |
| Helsinki-NLP/opus-mt-zh-en | 23.35 | 53.60 | 0.8426 | 3.78 |
| facebook/m2m100_418M | 15.99 | 50.13 | 0.7881 | 16.61 |
| facebook/nllb-200-distilled-600M | 26.22 | 55.18 | 0.8507 | 20.89 |
| facebook/m2m100_1.2B | 20.30 | 54.23 | 0.8206 | 33.12 |
| facebook/nllb-200-distilled-1.3B | 28.56 | 57.35 | 0.8620 | 36.64 |
`quickmt-zh-en` is both the fastest and the highest-quality model in this comparison.
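For reference, a minimal sketch of how these scores can be reproduced, assuming `hyps`/`refs` hold the detokenized system outputs and references, and that comet's `Unbabel/wmt22-comet-da` checkpoint is the "default model" referred to above:

```python
import sacrebleu

# Placeholder lists: in practice these hold the 1012 flores-devtest
# source sentences, system outputs, and English references.
srcs = ["他补充道:“我们现在有 4 个月大没有糖尿病的老鼠,但它们曾经得过该病。”"]
hyps = ['He added: "We now have 4-month-old mice that are non-diabetic, but they used to have the disease."']
refs = ['He added: "We now have 4-month-old mice that are non-diabetic, but they used to have it."']

bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])  # sacrebleu's default chrF settings give chrF2
print(f"BLEU: {bleu.score:.2f}  chrF2: {chrf.score:.2f}")

# COMET22 with the comet library (set gpus=0 to run on CPU)
from comet import download_model, load_from_checkpoint

comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
comet_out = comet_model.predict(
    [{"src": s, "mt": h, "ref": r} for s, h, r in zip(srcs, hyps, refs)],
    batch_size=8,
    gpus=1,
)
print(f"COMET22: {comet_out.system_score:.4f}")
```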