radinplaid committed · Commit 2eea1bd · verified · 1 Parent(s): e1fabc3

Delete .ipynb_checkpoints

.ipynb_checkpoints/README-checkpoint.md DELETED
@@ -1,120 +0,0 @@
- ---
- language:
- - en
- - zh
- tags:
- - translation
- license: cc-by-4.0
- datasets:
- - quickmt/quickmt-train.zh-en
- - quickmt/madlad400-en-backtranslated-zh
- - quickmt/newscrawl2024-en-backtranslated-zh
- model-index:
- - name: quickmt-zh-en
-   results:
-   - task:
-       name: Translation zho-eng
-       type: translation
-       args: zho-eng
-     dataset:
-       name: flores101-devtest
-       type: flores_101
-       args: zho_Hans eng_Latn devtest
-     metrics:
-     - name: BLEU
-       type: bleu
-       value: 30.0
-     - name: CHRF
-       type: chrf
-       value: 58.42
-     - name: COMET
-       type: comet
-       value: 86.72
- ---
-
-
- # `quickmt-zh-en` Neural Machine Translation Model
-
- `quickmt-zh-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from `zh` into `en`.
-
- `quickmt` models are roughly 3 times faster for GPU inference than OpusMT models and roughly [40 times](https://huggingface.co/spaces/quickmt/quickmt-vs-libretranslate) faster than [LibreTranslate](https://huggingface.co/spaces/quickmt/quickmt-vs-libretranslate)/[ArgosTranslate](https://github.com/argosopentech/argos-translate).
-
-
- ## *UPDATED VERSION!*
-
- This model was trained with back-translated data and has improved translation quality!
-
- * https://huggingface.co/datasets/quickmt/madlad400-en-backtranslated-zh
- * https://huggingface.co/datasets/quickmt/newscrawl2024-en-backtranslated-zh
-
-
- ## Try it on our Hugging Face Space
-
- Give it a try before downloading here: https://huggingface.co/spaces/quickmt/QuickMT-Demo
-
-
- ## Model Information
-
- * Trained using [`eole`](https://github.com/eole-nlp/eole)
- * 200M parameter seq2seq transformer
- * Separate 32k SentencePiece vocabularies for the source and target languages
- * Exported for fast inference to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) format
- * The PyTorch model (for use with [`eole`](https://github.com/eole-nlp/eole)) is available in this repository in the `eole-model` folder
-
- See the `eole` model configuration in this repository for further details, and the `eole-model` folder for the raw `eole` (PyTorch) model.
-
-
- ## Usage with `quickmt`
-
- If you want to do GPU inference, install the NVIDIA CUDA toolkit first.
-
- Next, install the `quickmt` python library and download the model:
-
- ```bash
- git clone https://github.com/quickmt/quickmt.git
- pip install -e ./quickmt/
-
- quickmt-model-download quickmt/quickmt-zh-en ./quickmt-zh-en
- ```
-
- Finally, use the model in Python:
-
- ```python
- from quickmt import Translator
-
- # Auto-detects GPU, set to "cpu" to force CPU inference
- t = Translator("./quickmt-zh-en/", device="auto")
-
- # Translate - set beam size to 1 for faster speed (but lower quality)
- sample_text = '埃胡德·乌尔博士(新斯科舍省哈利法克斯市达尔豪西大学医学教授,加拿大糖尿病协会临床与科学部门教授)提醒,这项研究仍处在早期阶段。'
-
- t(sample_text, beam_size=5)
- ```
-
- > 'Dr. Ehud Ur (Professor of Medicine, Dalhousie University, Halifax, Nova Scotia, and Professor of Clinical and Scientific Division, Canadian Diabetes Association) cautions that the study is still at an early stage.'
-
- ```python
- # Get alternative translations by sampling
- # You can pass any CTranslate2 `translate_batch` arguments
- t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
- ```
-
- > 'Dr Elhoud (Professor of Medicine at Dalhousie University, Halifax, Nova Scotia, and professor of clinical and scientific Division of the Canadian Diabetes Association) cautions that the study is still at an early stage.'
-
- The model is in `ctranslate2` format and the tokenizers are `sentencepiece`, so you can use `ctranslate2` directly instead of going through `quickmt`. It is also possible to use this model with e.g. [LibreTranslate](https://libretranslate.com/), which also uses `ctranslate2` and `sentencepiece`. A model in `safetensors` format for use with `eole` is also provided.
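
If you would rather not use the `quickmt` wrapper, direct `ctranslate2` + `sentencepiece` inference can look roughly like the sketch below. The SentencePiece file names are an assumption taken from the `eole` config in this repository (`zh.spm.model` / `en.spm.model`); adjust them to the files actually present in the downloaded model directory.

```python
import ctranslate2
import sentencepiece as spm

model_dir = "./quickmt-zh-en"  # directory created by quickmt-model-download

# Assumed file names (from the eole config); check the model directory.
sp_src = spm.SentencePieceProcessor(model_file=f"{model_dir}/zh.spm.model")
sp_tgt = spm.SentencePieceProcessor(model_file=f"{model_dir}/en.spm.model")

translator = ctranslate2.Translator(model_dir, device="auto")

src_text = "这项研究仍处在早期阶段。"
src_tokens = sp_src.encode(src_text, out_type=str)        # text -> subword pieces
results = translator.translate_batch([src_tokens], beam_size=5)
print(sp_tgt.decode(results[0].hypotheses[0]))            # subword pieces -> text
```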
-
-
- ## Metrics
-
- `bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("zho_Hans"->"eng_Latn"). `comet22` is calculated with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate the flores-devtest dataset (1012 sentences) on an RTX 4070s GPU with batch size 32.
-
-
- |                                  |   bleu |   chrf2 |   comet22 |   Time (s) |
- |:---------------------------------|-------:|--------:|----------:|-----------:|
- | quickmt/quickmt-zh-en            |  30.0  |   58.42 |     86.72 |       1.10 |
- | Helsinki-NLP/opus-mt-zh-en       |  22.99 |   53.98 |     84.6  |       3.73 |
- | facebook/nllb-200-distilled-600M |  26.02 |   55.27 |     85.1  |      21.69 |
- | facebook/nllb-200-distilled-1.3B |  28.61 |   57.43 |     86.22 |      37.55 |
- | facebook/m2m100_418M             |  19.55 |   50.83 |     82.04 |      18.2  |
- | facebook/m2m100_1.2B             |  24.9  |   54.89 |     85.1  |      35.49 |
-
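
For reference, a minimal sketch of how scores like these could be reproduced, assuming the Flores200 devtest sources (`srcs`), references (`refs`) and the model's translations (`hyps`) are already loaded as equal-length lists of strings:

```python
import sacrebleu
from comet import download_model, load_from_checkpoint

# srcs, refs, hyps: lists of plain-text sentences (assumed to be loaded already)
bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])  # chrF2 by default

comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
comet_out = comet_model.predict(
    [{"src": s, "mt": h, "ref": r} for s, h, r in zip(srcs, hyps, refs)],
    batch_size=32,
)

# system_score is in [0, 1]; the table above reports it multiplied by 100
print(f"BLEU: {bleu.score:.2f}  chrF2: {chrf.score:.2f}  COMET: {100 * comet_out.system_score:.2f}")
```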
 
.ipynb_checkpoints/eole-config-checkpoint.yaml DELETED
@@ -1,111 +0,0 @@
- ## IO
- save_data: data
- overwrite: True
- seed: 1234
- report_every: 100
- valid_metrics: ["BLEU"]
- tensorboard: true
- tensorboard_log_dir: tensorboard_small
-
- ### Vocab
- src_vocab: zh.eole.vocab
- tgt_vocab: en.eole.vocab
- src_vocab_size: 32000
- tgt_vocab_size: 32000
- vocab_size_multiple: 8
- share_vocab: false
- n_sample: 0
-
- data:
-   corpus_1:
-     path_src: hf://quickmt/quickmt-train.zh-en/zh
-     path_tgt: hf://quickmt/quickmt-train.zh-en/en
-     path_sco: hf://quickmt/quickmt-train.zh-en/sco
-     weight: 2
-   corpus_2:
-     path_src: hf://quickmt/newscrawl2024-en-backtranslated-zh/zh
-     path_tgt: hf://quickmt/newscrawl2024-en-backtranslated-zh/en
-     path_sco: hf://quickmt/newscrawl2024-en-backtranslated-zh/sco
-     weight: 1
-   corpus_3:
-     path_src: hf://quickmt/madlad400-en-backtranslated-zh/zh
-     path_tgt: hf://quickmt/madlad400-en-backtranslated-zh/en
-     path_sco: hf://quickmt/madlad400-en-backtranslated-zh/sco
-     weight: 2
-   valid:
-     path_src: valid.zh
-     path_tgt: valid.en
-
- transforms: [sentencepiece, filtertoolong]
- transforms_configs:
-   sentencepiece:
-     src_subword_model: "zh.spm.model"
-     tgt_subword_model: "en.spm.model"
-   filtertoolong:
-     src_seq_length: 256
-     tgt_seq_length: 256
-
- training:
-   # Run configuration
-   model_path: quickmt-zh-en-eole-model
-   keep_checkpoint: 4
-   train_steps: 200000
-   save_checkpoint_steps: 5000
-   valid_steps: 5000
-
-   # Train on a single GPU
-   world_size: 1
-   gpu_ranks: [0]
-
-   # Batching 10240
-   batch_type: "tokens"
-   batch_size: 6000
-   valid_batch_size: 2048
-   batch_size_multiple: 8
-   accum_count: [20]
-   accum_steps: [0]
-
-   # Optimizer & Compute
-   compute_dtype: "fp16"
-   optim: "adamw"
-   #use_amp: False
-   learning_rate: 3.0
-   warmup_steps: 5000
-   decay_method: "noam"
-   adam_beta2: 0.998
-
-   # Data loading
-   bucket_size: 256000
-   num_workers: 4
-   prefetch_factor: 64
-
-   # Hyperparams
-   dropout_steps: [0]
-   dropout: [0.1]
-   attention_dropout: [0.1]
-   max_grad_norm: 0
-   label_smoothing: 0.1
-   average_decay: 0.0001
-   param_init_method: xavier_uniform
-   normalization: "tokens"
-
- model:
-   architecture: "transformer"
-   share_embeddings: false
-   share_decoder_embeddings: false
-   add_estimator: false
-   add_ffnbias: true
-   add_qkvbias: false
-   layer_norm: standard
-   mlp_activation_fn: gelu
-   hidden_size: 768
-   encoder:
-     layers: 12
-   decoder:
-     layers: 2
-   heads: 8
-   transformer_ff: 4096
-   embeddings:
-     word_vec_size: 768
-     position_encoding_type: "SinusoidalInterleaved"
-