Translation · Persian · English

Commit 4bba198 (verified) · 1 Parent(s): 094fc6f
radinplaid committed: Upload folder using huggingface_hub

.ipynb_checkpoints/README-checkpoint.md CHANGED

This file is a Jupyter checkpoint copy of the model card; its diff matches the README.md diff shown below.

README.md CHANGED

````diff
@@ -1,12 +1,14 @@
 ---
 language:
-- en
 - fa
+- en
 tags:
 - translation
 license: cc-by-4.0
 datasets:
 - quickmt/quickmt-train.fa-en
+- quickmt/madlad400-en-backtranslated-fa
+- quickmt/newscrawl2024-en-backtranslated-fa
 model-index:
 - name: quickmt-fa-en
   results:
@@ -21,13 +23,13 @@ model-index:
     metrics:
     - name: BLEU
       type: bleu
-      value: 37.57
+      value: 38.99
     - name: CHRF
       type: chrf
-      value: 63.37
+      value: 64.55
     - name: COMET
       type: comet
-      value: 87.76
+      value: 88.14
 ---
@@ -35,17 +37,30 @@ model-index:
 
 `quickmt-fa-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from `fa` into `en`.
 
+`quickmt` models are roughly 3 times faster for GPU inference than OpusMT models and roughly [40 times](https://huggingface.co/spaces/quickmt/quickmt-vs-libretranslate) faster than [LibreTranslate](https://huggingface.co/spaces/quickmt/quickmt-vs-libretranslate)/[ArgosTranslate](https://github.com/argosopentech/argos-translate).
+
+## *UPDATED VERSION!*
+
+This model was trained with back-translated data and has improved translation quality!
+
+## Try it on our Huggingface Space
+
+Give it a try before downloading here: https://huggingface.co/spaces/quickmt/QuickMT-Demo
+
 ## Model Information
 
-* Trained using [`eole`](https://github.com/eole-nlp/eole)
-* 185M parameter transformer 'big' with 8 encoder layers and 2 decoder layers
-* 50k joint Sentencepiece vocabulary
-* Expested for fast inference to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format
-* Training data: https://huggingface.co/datasets/quickmt/quickmt-train.fa-en/tree/main
+* Trained using [`eole`](https://github.com/eole-nlp/eole)
+* 200M parameter seq2seq transformer
+* 32k separate Sentencepiece vocabs
+* Exported for fast inference to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format
+* The pytorch model (for use with [`eole`](https://github.com/eole-nlp/eole)) is available in this repository in the `eole-model` folder
 
 See the `eole` model configuration in this repository for further details and the `eole-model` for the raw `eole` (pytorch) model.
 
 ## Usage with `quickmt`
 
 You must install the Nvidia cuda toolkit first, if you want to do GPU inference.
@@ -54,7 +69,7 @@ Next, install the `quickmt` python library and download the model:
 
 ```bash
 git clone https://github.com/quickmt/quickmt.git
-pip install ./quickmt/
+pip install -e ./quickmt/
 
 quickmt-model-download quickmt/quickmt-fa-en ./quickmt-fa-en
 ```
@@ -65,34 +80,40 @@ Finally use the model in python:
 from quickmt import Translator
 
 # Auto-detects GPU, set to "cpu" to force CPU inference
-t = Translator("./quickmt-fa-en/", device="auto")
+mt = Translator("./quickmt-fa-en/", device="auto")
 
 # Translate - set beam size to 1 for faster speed (but lower quality)
 sample_text = 'دکتر ایهود اور، استاد پزشکی دانشگاه دالهاوزی در هلیفکس، نوااسکوشیا و رئیس بخش کلینیکی و علمی انجمن دیابت کانادا هشدار داد که این تحقیق هنوز در روزهای آغازین خود می\u200cباشد.'
 
-t(sample_text, beam_size=5)
+mt(sample_text, beam_size=5)
 ```
 
-> 'Dr. Ehudover, a professor of medicine at Dalhousie University in Halifax, Nova Scotia and head of the clinical and scientific division of the Canadian Diabetes Association, warned that the study was still in its early days.'
+> 'Dr. Ehud Orr, a professor of medicine at Dalhousie University in Halifax, Nova Scotia, and head of the Canadian Diabetes Association’s clinical and scientific department, warned that the research is still in its early days.'
 
 ```python
 # Get alternative translations by sampling
 # You can pass any cTranslate2 `translate_batch` arguments
-t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
+mt([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
 ```
 
-> 'Dr. Ehudver, professor of medicine at Dalhousie University in Halifax, Nova Scotia and head of the Clinical and Scientific Section of the Canadian Diabetes Society, cautioned the new study was still very early.'
+> 'Dr. Ehud Orr, medical professor of Dalhousie University in Halifax, Nova Scotia and head of the Clinical and Scientific Section of the Canadian Diabetes Association warned that the research is still in its early days.'
 
-The model is in `ctranslate2` format, and the tokenizers are `sentencepiece`, so you can use `ctranslate2` directly instead of through `quickmt`. It is also possible to get this model to work with e.g. [LibreTranslate](https://libretranslate.com/) which also uses `ctranslate2` and `sentencepiece`.
+The model is in `ctranslate2` format, and the tokenizers are `sentencepiece`, so you can use `ctranslate2` directly instead of through `quickmt`. It is also possible to get this model to work with e.g. [LibreTranslate](https://libretranslate.com/) which also uses `ctranslate2` and `sentencepiece`. A model in safetensors format to be used with `eole` is also provided.
 
 ## Metrics
 
-`bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("pes_Arab"->"eng_Latn"). `comet22` with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate the flores-devtest dataset (1012 sentences) on an RTX 4070s GPU with batch size 32 (faster speed is possible using a larger batch size).
+`bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores). `comet22` is calculated with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate the flores-devtest dataset (1012 sentences) on an RTX 4070s GPU with batch size 32.
 
-| | bleu | chrf2 | comet22 | Time (s) |
-|:---------------------------------|-------:|--------:|----------:|-----------:|
-| quickmt-fa-en | 37.57 | 63.37 | 87.76 | 1.16 |
-| facebook/nllb-200-distilled-600M | 34.79 | 60.86 | 86.49 | 21.17 |
-| facebook/nllb-200-distilled-1.3B | 37.91 | 63.39 | 87.82 | 36.9 |
-| facebook/m2m100_418M | 27.2 | 55.82 | 82.9 | 18.24 |
-| facebook/m2m100_1.2B | 29.12 | 56.39 | 83.5 | 35.14 |
+| | bleu | chrf2 | comet22 | Time (s) |
+|:--------------------------------------|-------:|--------:|----------:|-----------:|
+| quickmt/quickmt-fa-en | 38.99 | 64.55 | 88.14 | 1.1 |
+| facebook/nllb-200-distilled-600M | 34.8 | 60.86 | 86.49 | 21.13 |
+| facebook/nllb-200-distilled-1.3B | 37.91 | 63.39 | 87.82 | 36.86 |
+| facebook/m2m100_418M | 27.2 | 55.82 | 82.9 | 18.23 |
+| facebook/m2m100_1.2B | 29.13 | 56.4 | 83.5 | 34.8 |
+| tencent/HY-MT1.5-1.8B | 20.87 | 55.02 | 86.22 | 10.0 |
+| tencent/HY-MT1.5-7B-FP8 | 28.16 | 59.49 | 88.07 | 36.0 |
+| CohereLabs/aya-expanse-8b (bnb quant) | 35.29 | 62.46 | 88.43 | 77.37 |
````
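The updated card notes that the exported model can be driven with `ctranslate2` and `sentencepiece` directly, without `quickmt`. A minimal sketch of that path, assuming the repository's file layout (`model.bin` plus `src.spm.model`/`tgt.spm.model` at the root) and the `./quickmt-fa-en` download directory used above:

```python
# Sketch: translate with CTranslate2 + SentencePiece directly (no quickmt).
import ctranslate2
import sentencepiece as spm

translator = ctranslate2.Translator("./quickmt-fa-en", device="auto")
src_sp = spm.SentencePieceProcessor(model_file="./quickmt-fa-en/src.spm.model")
tgt_sp = spm.SentencePieceProcessor(model_file="./quickmt-fa-en/tgt.spm.model")

text = "دکتر ایهود اور، استاد پزشکی دانشگاه دالهاوزی در هلیفکس، نوااسکوشیا و رئیس بخش کلینیکی و علمی انجمن دیابت کانادا هشدار داد که این تحقیق هنوز در روزهای آغازین خود می\u200cباشد."
tokens = src_sp.encode(text, out_type=str)            # source subword pieces
result = translator.translate_batch([tokens], beam_size=5)
print(tgt_sp.decode(result[0].hypotheses[0]))         # detokenized English
```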
 
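The metrics section names the scoring tools but not the calls. A sketch of how numbers like these are typically produced with `sacrebleu` and `comet`, assuming `srcs`, `hyps` and `refs` have already been filled with the flores-devtest sources, system outputs and references (loading elided):

```python
# Sketch: corpus-level BLEU / chrF2 / COMET scoring.
import sacrebleu
from comet import download_model, load_from_checkpoint

srcs = ["..."]  # Persian sources (COMET also scores against the source)
hyps = ["..."]  # model translations
refs = ["..."]  # English references

print(sacrebleu.corpus_bleu(hyps, [refs]).score)  # bleu
print(sacrebleu.corpus_chrf(hyps, [refs]).score)  # chrf2 (sacrebleu's default chrF)
comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(srcs, hyps, refs)]
# system_score is on a 0-1 scale; multiply by 100 to match the table
print(100 * comet_model.predict(data, batch_size=32).system_score)
```
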
eole-config.yaml CHANGED

```diff
@@ -8,30 +8,36 @@ tensorboard: true
 tensorboard_log_dir: tensorboard
 
 ### Vocab
-src_vocab: fa.eole.vocab
-tgt_vocab: en.eole.vocab
-src_vocab_size: 20000
-tgt_vocab_size: 20000
+src_vocab: faen/fa.eole.vocab
+tgt_vocab: faen/en.eole.vocab
+src_vocab_size: 32000
+tgt_vocab_size: 32000
 vocab_size_multiple: 8
 share_vocab: false
 n_sample: 0
 
 data:
   corpus_1:
-    # path_src: hf://quickmt/quickmt-train.id-en/id
-    # path_tgt: hf://quickmt/quickmt-train.id-en/en
-    # path_sco: hf://quickmt/quickmt-train.id-en/sco
-    path_src: train.fa
-    path_tgt: train.en
+    path_src: faen/train.cleaned.filtered.fa
+    path_tgt: faen/train.cleaned.filtered.en
+    weight: 2
+  corpus_2:
+    path_src: /home/mark/mt/data/newscrawl.backtrans.fa
+    path_tgt: /home/mark/mt/data/newscrawl.2024.en
+    weight: 1
+  corpus_3:
+    path_src: /home/mark/mt/data/madlad.backtrans.fa
+    path_tgt: /home/mark/mt/data/madlad.en
+    weight: 2
   valid:
-    path_src: dev.fa
-    path_tgt: dev.en
+    path_src: faen/dev.fa
+    path_tgt: faen/dev.en
 
 transforms: [sentencepiece, filtertoolong]
 transforms_configs:
   sentencepiece:
-    src_subword_model: "fa.spm.model"
-    tgt_subword_model: "en.spm.model"
+    src_subword_model: "faen/fa.spm.model"
+    tgt_subword_model: "faen/en.spm.model"
   filtertoolong:
     src_seq_length: 256
     tgt_seq_length: 256
@@ -39,9 +45,8 @@ transforms_configs:
 training:
   # Run configuration
   model_path: quickmt-fa-en-eole-model
-  #train_from: model
   keep_checkpoint: 4
-  train_steps: 100000
+  train_steps: 200000
   save_checkpoint_steps: 5000
   valid_steps: 5000
 
@@ -49,27 +54,28 @@ training:
   world_size: 1
   gpu_ranks: [0]
 
-  # Batching 10240
+  # Batching 120,000 tokens
+  # For RTX 5090, 15000 batch size, accum_count 8
   batch_type: "tokens"
-  batch_size: 8000
-  valid_batch_size: 4096
+  batch_size: 6000
+  valid_batch_size: 2048
   batch_size_multiple: 8
-  accum_count: [10]
+  accum_count: [20]
   accum_steps: [0]
 
   # Optimizer & Compute
   compute_dtype: "fp16"
   optim: "adamw"
-  #use_amp: False
-  learning_rate: 2.0
-  warmup_steps: 4000
+  #use_amp: True
+  learning_rate: 3.0
+  warmup_steps: 5000
   decay_method: "noam"
   adam_beta2: 0.998
 
   # Data loading
-  bucket_size: 128000
+  bucket_size: 256000
   num_workers: 4
-  prefetch_factor: 32
+  prefetch_factor: 128
 
   # Hyperparams
   dropout_steps: [0]
@@ -85,14 +91,20 @@ model:
   architecture: "transformer"
   share_embeddings: false
   share_decoder_embeddings: false
-  hidden_size: 1024
+  add_estimator: false
+  add_ffnbias: true
+  add_qkvbias: false
+  layer_norm: standard
+  mlp_activation_fn: gelu
+  hidden_size: 768
   encoder:
-    layers: 8
+    layers: 12
   decoder:
     layers: 2
-  heads: 8
+  heads: 16
   transformer_ff: 4096
   embeddings:
-    word_vec_size: 1024
+    word_vec_size: 768
   position_encoding_type: "SinusoidalInterleaved"
```
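
The batching comment in the updated config is plain arithmetic: `batch_size: 6000` tokens with `accum_count: [20]` gives 120,000 tokens per optimizer step. The `noam` decay combined with `learning_rate: 3.0`, the model's `hidden_size: 768` and `warmup_steps: 5000` then fixes the learning-rate curve. A back-of-envelope sketch, assuming `eole` keeps the original noam formulation from "Attention Is All You Need" (worth checking against the `eole` source):

```python
# Sketch: effective batch size and noam learning-rate schedule for this config.
batch_size, accum_count = 6000, 20
print(batch_size * accum_count)  # 120000 tokens per optimizer step

def noam_lr(step, lr=3.0, hidden_size=768, warmup=5000):
    return lr * hidden_size**-0.5 * min(step**-0.5, step * warmup**-1.5)

print(noam_lr(5_000))    # peak at the end of warmup, ~1.5e-3
print(noam_lr(200_000))  # at train_steps, ~2.4e-4
```
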
eole-model/config.json CHANGED

```diff
@@ -1,68 +1,68 @@
 {
-  "share_vocab": false,
-  "vocab_size_multiple": 8,
-  "src_vocab_size": 20000,
-  "seed": 1234,
-  "valid_metrics": [
-    "BLEU"
+  "transforms": [
+    "sentencepiece",
+    "filtertoolong"
   ],
+  "share_vocab": false,
   "save_data": "data",
-  "tensorboard": true,
-  "tgt_vocab": "en.eole.vocab",
   "tensorboard_log_dir": "tensorboard",
-  "tensorboard_log_dir_dated": "tensorboard/Jun-03_06-03-40",
   "overwrite": true,
-  "tgt_vocab_size": 20000,
-  "src_vocab": "fa.eole.vocab",
-  "n_sample": 0,
-  "report_every": 100,
-  "transforms": [
-    "sentencepiece",
-    "filtertoolong"
+  "valid_metrics": [
+    "BLEU"
   ],
+  "report_every": 100,
+  "tgt_vocab": "faen/en.eole.vocab",
+  "vocab_size_multiple": 8,
+  "n_sample": 0,
+  "seed": 1234,
+  "tensorboard": true,
+  "tensorboard_log_dir_dated": "tensorboard/Jan-02_10-23-54",
+  "src_vocab_size": 32000,
+  "src_vocab": "faen/fa.eole.vocab",
+  "tgt_vocab_size": 32000,
   "training": {
-    "train_steps": 100000,
     "dropout_steps": [
       0
     ],
-    "batch_size": 8000,
-    "attention_dropout": [
-      0.1
-    ],
-    "accum_count": [
-      10
-    ],
-    "prefetch_factor": 32,
-    "valid_steps": 5000,
-    "adam_beta2": 0.998,
     "world_size": 1,
+    "warmup_steps": 5000,
+    "num_workers": 0,
+    "batch_size_multiple": 8,
+    "compute_dtype": "torch.float16",
+    "param_init_method": "xavier_uniform",
+    "normalization": "tokens",
     "accum_steps": [
       0
     ],
-    "model_path": "quickmt-fa-en-eole-model",
-    "label_smoothing": 0.1,
-    "keep_checkpoint": 4,
-    "gpu_ranks": [
-      0
-    ],
-    "batch_size_multiple": 8,
-    "warmup_steps": 4000,
     "decay_method": "noam",
-    "max_grad_norm": 0.0,
+    "model_path": "quickmt-fa-en-eole-model",
+    "prefetch_factor": 128,
     "batch_type": "tokens",
+    "valid_batch_size": 2048,
     "save_checkpoint_steps": 5000,
-    "param_init_method": "xavier_uniform",
-    "normalization": "tokens",
-    "learning_rate": 2.0,
-    "optim": "adamw",
-    "compute_dtype": "torch.float16",
-    "bucket_size": 128000,
+    "train_steps": 200000,
     "dropout": [
       0.1
     ],
+    "attention_dropout": [
+      0.1
+    ],
+    "batch_size": 6000,
+    "label_smoothing": 0.1,
     "average_decay": 0.0001,
-    "num_workers": 0,
-    "valid_batch_size": 4096
+    "learning_rate": 3.0,
+    "max_grad_norm": 0.0,
+    "accum_count": [
+      20
+    ],
+    "gpu_ranks": [
+      0
+    ],
+    "keep_checkpoint": 4,
+    "bucket_size": 256000,
+    "optim": "adamw",
+    "valid_steps": 5000,
+    "adam_beta2": 0.998
   },
   "data": {
     "corpus_1": {
@@ -70,63 +70,97 @@
       "transforms": [
         "sentencepiece",
         "filtertoolong"
       ],
-      "path_src": "train.fa",
+      "weight": 2,
       "path_align": null,
-      "path_tgt": "train.en"
+      "path_tgt": "faen/train.cleaned.filtered.en",
+      "path_src": "faen/train.cleaned.filtered.fa"
     },
-    "valid": {
+    "corpus_2": {
       "transforms": [
         "sentencepiece",
         "filtertoolong"
       ],
-      "path_src": "dev.fa",
+      "weight": 1,
       "path_align": null,
-      "path_tgt": "dev.en"
-    }
-  },
-  "transforms_configs": {
-    "sentencepiece": {
-      "src_subword_model": "${MODEL_PATH}/fa.spm.model",
-      "tgt_subword_model": "${MODEL_PATH}/en.spm.model"
+      "path_tgt": "/home/mark/mt/data/newscrawl.2024.en",
+      "path_src": "/home/mark/mt/data/newscrawl.backtrans.fa"
     },
-    "filtertoolong": {
-      "src_seq_length": 256,
-      "tgt_seq_length": 256
+    "corpus_3": {
+      "transforms": [
+        "sentencepiece",
+        "filtertoolong"
+      ],
+      "weight": 2,
+      "path_align": null,
+      "path_tgt": "/home/mark/mt/data/madlad.en",
+      "path_src": "/home/mark/mt/data/madlad.backtrans.fa"
+    },
+    "valid": {
+      "transforms": [
+        "sentencepiece",
+        "filtertoolong"
+      ],
+      "path_tgt": "faen/dev.en",
+      "path_src": "faen/dev.fa",
+      "path_align": null
     }
   },
   "model": {
+    "heads": 16,
     "share_embeddings": false,
-    "hidden_size": 1024,
-    "architecture": "transformer",
-    "share_decoder_embeddings": false,
-    "heads": 8,
-    "transformer_ff": 4096,
+    "add_estimator": false,
+    "layer_norm": "standard",
     "position_encoding_type": "SinusoidalInterleaved",
-    "decoder": {
-      "decoder_type": "transformer",
-      "hidden_size": 1024,
-      "n_positions": null,
-      "transformer_ff": 4096,
-      "heads": 8,
+    "add_qkvbias": false,
+    "transformer_ff": 4096,
+    "share_decoder_embeddings": false,
+    "hidden_size": 768,
+    "architecture": "transformer",
+    "add_ffnbias": true,
+    "mlp_activation_fn": "gelu",
+    "embeddings": {
+      "word_vec_size": 768,
+      "src_word_vec_size": 768,
       "position_encoding_type": "SinusoidalInterleaved",
-      "tgt_word_vec_size": 1024,
-      "layers": 2
+      "tgt_word_vec_size": 768
     },
     "encoder": {
-      "hidden_size": 1024,
+      "heads": 16,
       "encoder_type": "transformer",
+      "layer_norm": "standard",
+      "position_encoding_type": "SinusoidalInterleaved",
+      "add_qkvbias": false,
       "n_positions": null,
-      "src_word_vec_size": 1024,
-      "heads": 8,
       "transformer_ff": 4096,
+      "src_word_vec_size": 768,
+      "layers": 12,
+      "add_ffnbias": true,
+      "hidden_size": 768,
+      "mlp_activation_fn": "gelu"
+    },
+    "decoder": {
+      "heads": 16,
+      "layer_norm": "standard",
       "position_encoding_type": "SinusoidalInterleaved",
-      "layers": 8
+      "add_qkvbias": false,
+      "decoder_type": "transformer",
+      "n_positions": null,
+      "transformer_ff": 4096,
+      "tgt_word_vec_size": 768,
+      "layers": 2,
+      "add_ffnbias": true,
+      "hidden_size": 768,
+      "mlp_activation_fn": "gelu"
+    }
+  },
+  "transforms_configs": {
+    "sentencepiece": {
+      "tgt_subword_model": "${MODEL_PATH}/en.spm.model",
+      "src_subword_model": "${MODEL_PATH}/fa.spm.model"
     },
-    "embeddings": {
-      "word_vec_size": 1024,
-      "src_word_vec_size": 1024,
-      "tgt_word_vec_size": 1024,
-      "position_encoding_type": "SinusoidalInterleaved"
+    "filtertoolong": {
+      "src_seq_length": 256,
+      "tgt_seq_length": 256
     }
   }
 }
```
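
The architecture pinned down in this `config.json` squares with the "200M parameter" line in the card. A rough count under standard transformer accounting (biases, LayerNorm and positional encodings ignored, so an estimate rather than an exact figure):

```python
# Sketch: back-of-envelope parameter count for hidden 768, ff 4096,
# 12 encoder / 2 decoder layers, separate 32k source/target vocabs.
h, ff, vocab = 768, 4096, 32000

attn = 4 * h * h            # Q, K, V and output projections
ffn = 2 * h * ff            # the two feed-forward matrices
enc = 12 * (attn + ffn)     # self-attention + FFN per encoder layer
dec = 2 * (2 * attn + ffn)  # self-attention + cross-attention + FFN
emb = 2 * vocab * h         # separate source and target embeddings
gen = vocab * h             # output projection (share_decoder_embeddings: false)

print(f"{(enc + dec + emb + gen) / 1e6:.0f}M parameters")  # ~200M
```
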
eole-model/en.spm.model CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8595a973d0f43dd25225a0eb3411ffbdf8ae4736e41d800995b1d25ac2c6019a
-size 588878
+oid sha256:53931998683cf961ec8a9d3327deb4021d5bc12be2f2650758bb9a35dd828599
+size 805200
```

eole-model/fa.spm.model CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6aedb8783f67ae6310ebe5f9c0798f6fcdf7bfcff877d548d039d7bc53ca6548
-size 640837
+oid sha256:a0b876dcd23cce26a8b2dd35f3de1bea3dada039d27753498f5388960a97aa3f
+size 889048
```

eole-model/model.00.safetensors CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:efd987b8011fdbe578da4e97889cb0815e6f1f34de81fdeb56ae26f2db7e6849
-size 823882912
+oid sha256:7acbcafc1d1cc9c6a51f34719614d2346ec80019d637bd23dbee20fbaed5668c
+size 829569112
```

eole-model/vocab.json CHANGED

The diff for this file is too large to render. See raw diff.

model.bin CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9317a7e6b5c80b63a6827571c5a3b9fffde72d5af6aad96175d51459be94844b
-size 401699775
+oid sha256:091265c533b556b8c9d111519dd8142ff49e901256e35613876ce92679c91487
+size 407101843
```

source_vocabulary.json CHANGED

The diff for this file is too large to render. See raw diff.

src.spm.model CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6aedb8783f67ae6310ebe5f9c0798f6fcdf7bfcff877d548d039d7bc53ca6548
-size 640837
+oid sha256:a0b876dcd23cce26a8b2dd35f3de1bea3dada039d27753498f5388960a97aa3f
+size 889048
```

target_vocabulary.json CHANGED

The diff for this file is too large to render. See raw diff.

tgt.spm.model CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8595a973d0f43dd25225a0eb3411ffbdf8ae4736e41d800995b1d25ac2c6019a
-size 588878
+oid sha256:53931998683cf961ec8a9d3327deb4021d5bc12be2f2650758bb9a35dd828599
+size 805200
```
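
These last entries are git-lfs pointer files: `oid` is the sha256 of the real blob and `size` its byte count, so a downloaded file can be verified against the pointer. A sketch, assuming the model was downloaded to `./quickmt-fa-en`:

```python
# Sketch: verify a downloaded LFS blob against its pointer (model.bin shown).
import hashlib
from pathlib import Path

path = Path("./quickmt-fa-en/model.bin")
print(hashlib.sha256(path.read_bytes()).hexdigest())  # expect 091265c533b5...
print(path.stat().st_size)                            # expect 407101843
```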