radinplaid committed b2c47b8 (verified) · Parent(s): 34f2b99

Upload folder using huggingface_hub
.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,107 @@
(identical to README.md below)
.ipynb_checkpoints/eole-config-checkpoint.yaml ADDED
@@ -0,0 +1,95 @@
(identical to eole-config.yaml below)
README.md CHANGED
@@ -1,3 +1,107 @@
- ---
- license: cc-by-4.0
- ---
+ ---
+ language:
+ - en
+ - sv
+ tags:
+ - translation
+ license: cc-by-4.0
+ datasets:
+ - quickmt/quickmt-train.sv-en
+ model-index:
+ - name: quickmt-sv-en
+   results:
+   - task:
+       name: Translation swe-eng
+       type: translation
+       args: swe-eng
+     dataset:
+       name: flores101-devtest
+       type: flores_101
+       args: swe_Latn eng_Latn devtest
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 47.59
+     - name: CHRF
+       type: chrf
+       value: 70.93
+     - name: COMET
+       type: comet
+       value: 89.82
+ ---
+
+
+ # `quickmt-sv-en` Neural Machine Translation Model
+
+ `quickmt-sv-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from Swedish (`sv`) into English (`en`).
+
+
+ ## Try it on our Hugging Face Space
+
+ Try it out before downloading here: https://huggingface.co/spaces/quickmt/QuickMT-Demo
+
+
+ ## Model Information
+
+ * Trained using [`eole`](https://github.com/eole-nlp/eole)
+ * 200M-parameter 'big' transformer with 8 encoder layers and 2 decoder layers
+ * Separate 32k SentencePiece vocabularies for source and target
+ * Exported to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format for fast inference
+ * The PyTorch model (for use with [`eole`](https://github.com/eole-nlp/eole)) is available in this repository in the `eole-model` folder
+
+ See the `eole` model configuration in this repository for further details, and the `eole-model` folder for the raw `eole` (PyTorch) model.
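+
+ As a quick sanity check on the parameter count, here is a rough back-of-the-envelope tally (a sketch only) from the configuration above: hidden size 1024, feed-forward size 4096, two 32k vocabularies, ignoring biases and layer norms:
+
+ ```python
+ # Rough parameter count for the architecture described above
+ # (ignores biases, layer norms and positional encodings)
+ d, ff, vocab = 1024, 4096, 32000
+
+ emb = 2 * vocab * d         # separate source/target embeddings
+                             # (the decoder output layer is tied)
+ attn = 4 * d * d            # Q, K, V and output projections
+ ffn = 2 * d * ff            # two feed-forward matrices
+ enc = 8 * (attn + ffn)      # 8 encoder layers
+ dec = 2 * (2 * attn + ffn)  # 2 decoder layers (self and cross attention)
+
+ print(f"{(emb + enc + dec) / 1e6:.0f}M")  # ~200M
+ ```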
+
+
+ ## Usage with `quickmt`
+
+ If you want to run inference on a GPU, you must install the Nvidia CUDA toolkit first.
+
+ Next, install the `quickmt` Python library and download the model:
+
+ ```bash
+ git clone https://github.com/quickmt/quickmt.git
+ pip install ./quickmt/
+
+ quickmt-model-download quickmt/quickmt-sv-en ./quickmt-sv-en
+ ```
+
+ Finally, use the model in Python:
+
+ ```python
+ from quickmt import Translator
+
+ # Auto-detects GPU; set device="cpu" to force CPU inference
+ t = Translator("./quickmt-sv-en/", device="auto")
+
+ # Translate - set beam_size=1 for faster (but slightly lower-quality) output
+ sample_text = 'Dr. Ehud Ur, professor i medicin vid Dalhousie University i Halifax, Nova Scotia och ordförande för den kliniska och vetenskapliga avdelningen av den Kanadensiska diabetesföreningen, varnade för att forskningen fortfarande befinner sig i ett tidigt stadium.'
+
+ t(sample_text, beam_size=5)
+ ```
+
+ > 'Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chairman of the clinical and scientific department of the Canadian Diabetes Association, warned that the research is still at an early stage.'
+
+ ```python
+ # Get alternative translations by sampling
+ # You can pass any CTranslate2 `translate_batch` arguments
+ t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
+ ```
+
+ > 'Dr. Ehud Ur, a Professor of Medicine at Dalhousie University in Halifax, Nova Scotia and Chair of the Clinical and Scientific Division of the Canadian Diabetes Society, warned that the research is still at an early stage.'
+
+ The model is in CTranslate2 format and the tokenizers are SentencePiece models, so you can use `ctranslate2` directly instead of going through `quickmt`. It is also possible to use this model with e.g. [LibreTranslate](https://libretranslate.com/), which also uses `ctranslate2` and `sentencepiece`. A model in safetensors format for use with `eole` is also provided.
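+
+ For example, here is a minimal sketch of direct `ctranslate2` + `sentencepiece` inference. It assumes the model was downloaded to `./quickmt-sv-en` as above and uses the `src.spm.model` and `tgt.spm.model` files from this repository:
+
+ ```python
+ import ctranslate2
+ import sentencepiece as spm
+
+ model_dir = "./quickmt-sv-en"
+ translator = ctranslate2.Translator(model_dir, device="cpu")
+ sp_src = spm.SentencePieceProcessor(model_file=f"{model_dir}/src.spm.model")
+ sp_tgt = spm.SentencePieceProcessor(model_file=f"{model_dir}/tgt.spm.model")
+
+ # Tokenize with SentencePiece, translate with CTranslate2, detokenize
+ tokens = sp_src.encode(sample_text, out_type=str)
+ results = translator.translate_batch([tokens], beam_size=5)
+ print(sp_tgt.decode(results[0].hypotheses[0]))
+ ```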
+
+
+ ## Metrics
+
+ `bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("swe_Latn" -> "eng_Latn"). `comet22` is calculated with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds taken to translate the flores-devtest dataset (1012 sentences) on an RTX 4070s GPU with batch size 32.
+
+
+ | Model                            |   bleu |   chrf2 |   comet22 |   Time (s) |
+ |:---------------------------------|-------:|--------:|----------:|-----------:|
+ | quickmt/quickmt-sv-en            |  47.59 |   70.93 |     89.82 |       1.5  |
+ | Helsinki-NLP/opus-mt-sv-en       |  45.51 |   68.88 |     89.08 |       3.25 |
+ | facebook/nllb-200-distilled-600M |  46.69 |   69.22 |     89.17 |      20.82 |
+ | facebook/nllb-200-distilled-1.3B |  49.29 |   71.12 |     89.99 |      36.76 |
+ | facebook/m2m100_418M             |  40.05 |   65.13 |     85.91 |      17.6  |
+ | facebook/m2m100_1.2B             |  45.34 |   68.78 |     88.95 |      34.15 |
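+
+ For reference, a minimal sketch of how such scores can be computed, assuming `srcs`, `hyps` and `refs` are parallel lists of source, hypothesis and reference sentences for the devtest set:
+
+ ```python
+ import sacrebleu
+ from comet import download_model, load_from_checkpoint
+
+ # BLEU and chrF2 (beta=2 is sacrebleu's default)
+ print(sacrebleu.corpus_bleu(hyps, [refs]).score)
+ print(sacrebleu.corpus_chrf(hyps, [refs]).score)
+
+ # COMET-22 with the default Unbabel/wmt22-comet-da model;
+ # system_score is on a 0-1 scale (shown x100 in the table above)
+ comet = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
+ data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(srcs, hyps, refs)]
+ print(comet.predict(data, batch_size=32).system_score)
+ ```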
config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "add_source_bos": false,
+   "add_source_eos": false,
+   "bos_token": "<s>",
+   "decoder_start_token": "<s>",
+   "eos_token": "</s>",
+   "layer_norm_epsilon": 1e-06,
+   "multi_query_attention": false,
+   "unk_token": "<unk>"
+ }
eole-config.yaml ADDED
@@ -0,0 +1,95 @@
+ ## IO
+ save_data: data
+ overwrite: True
+ seed: 1234
+ report_every: 100
+ valid_metrics: ["BLEU"]
+ tensorboard: true
+ tensorboard_log_dir: tensorboard
+
+ ### Vocab
+ src_vocab: sv.eole.vocab
+ tgt_vocab: en.eole.vocab
+ src_vocab_size: 32000
+ tgt_vocab_size: 32000
+ vocab_size_multiple: 8
+ share_vocab: false
+ n_sample: 0
+
+ data:
+   corpus_1:
+     path_src: hf://quickmt/quickmt-train.sv-en/sv
+     path_tgt: hf://quickmt/quickmt-train.sv-en/en
+     path_sco: hf://quickmt/quickmt-train.sv-en/sco
+   valid:
+     path_src: valid.sv
+     path_tgt: valid.en
+
+ transforms: [sentencepiece, filtertoolong]
+ transforms_configs:
+   sentencepiece:
+     src_subword_model: "sv.spm.model"
+     tgt_subword_model: "en.spm.model"
+   filtertoolong:
+     src_seq_length: 256
+     tgt_seq_length: 256
+
+ training:
+   # Run configuration
+   model_path: quickmt-sv-en-eole-model
+   keep_checkpoint: 4
+   train_steps: 100000
+   save_checkpoint_steps: 5000
+   valid_steps: 5000
+
+   # Train on a single GPU
+   world_size: 1
+   gpu_ranks: [0]
+
+   # Batching
+   batch_type: "tokens"
+   batch_size: 6000
+   valid_batch_size: 2048
+   batch_size_multiple: 8
+   accum_count: [20]
+   accum_steps: [0]
+
+   # Optimizer & Compute
+   compute_dtype: "fp16"
+   optim: "adamw"
+   #use_amp: False
+   learning_rate: 2.0
+   warmup_steps: 2000
+   decay_method: "noam"
+   adam_beta2: 0.998
+
+   # Data loading
+   bucket_size: 128000
+   num_workers: 4
+   prefetch_factor: 32
+
+   # Hyperparams
+   dropout_steps: [0]
+   dropout: [0.1]
+   attention_dropout: [0.1]
+   max_grad_norm: 0
+   label_smoothing: 0.1
+   average_decay: 0.0001
+   param_init_method: xavier_uniform
+   normalization: "tokens"
+
+ model:
+   architecture: "transformer"
+   share_embeddings: false
+   share_decoder_embeddings: true
+   hidden_size: 1024
+   encoder:
+     layers: 8
+   decoder:
+     layers: 2
+   heads: 8
+   transformer_ff: 4096
+   embeddings:
+     word_vec_size: 1024
+     position_encoding_type: "SinusoidalInterleaved"
eole-model/config.json ADDED
@@ -0,0 +1,132 @@
+ {
+   "tgt_vocab_size": 32000,
+   "n_sample": 0,
+   "tgt_vocab": "en.eole.vocab",
+   "overwrite": true,
+   "src_vocab": "sv.eole.vocab",
+   "src_vocab_size": 32000,
+   "save_data": "data",
+   "tensorboard_log_dir_dated": "tensorboard/Nov-09_21-14-34",
+   "report_every": 100,
+   "share_vocab": false,
+   "tensorboard": true,
+   "transforms": [
+     "sentencepiece",
+     "filtertoolong"
+   ],
+   "seed": 1234,
+   "valid_metrics": [
+     "BLEU"
+   ],
+   "tensorboard_log_dir": "tensorboard",
+   "vocab_size_multiple": 8,
+   "training": {
+     "optim": "adamw",
+     "world_size": 1,
+     "batch_size": 6000,
+     "param_init_method": "xavier_uniform",
+     "batch_size_multiple": 8,
+     "label_smoothing": 0.1,
+     "dropout_steps": [
+       0
+     ],
+     "bucket_size": 128000,
+     "adam_beta2": 0.998,
+     "compute_dtype": "torch.float16",
+     "dropout": [
+       0.1
+     ],
+     "valid_batch_size": 2048,
+     "model_path": "quickmt-sv-en-eole-model",
+     "valid_steps": 5000,
+     "average_decay": 0.0001,
+     "decay_method": "noam",
+     "batch_type": "tokens",
+     "prefetch_factor": 32,
+     "train_steps": 100000,
+     "num_workers": 0,
+     "normalization": "tokens",
+     "attention_dropout": [
+       0.1
+     ],
+     "warmup_steps": 2000,
+     "accum_steps": [
+       0
+     ],
+     "accum_count": [
+       20
+     ],
+     "max_grad_norm": 0.0,
+     "save_checkpoint_steps": 5000,
+     "keep_checkpoint": 4,
+     "learning_rate": 2.0,
+     "gpu_ranks": [
+       0
+     ]
+   },
+   "model": {
+     "architecture": "transformer",
+     "share_decoder_embeddings": true,
+     "hidden_size": 1024,
+     "share_embeddings": false,
+     "heads": 8,
+     "transformer_ff": 4096,
+     "position_encoding_type": "SinusoidalInterleaved",
+     "encoder": {
+       "n_positions": null,
+       "hidden_size": 1024,
+       "layers": 8,
+       "heads": 8,
+       "encoder_type": "transformer",
+       "transformer_ff": 4096,
+       "src_word_vec_size": 1024,
+       "position_encoding_type": "SinusoidalInterleaved"
+     },
+     "decoder": {
+       "tgt_word_vec_size": 1024,
+       "n_positions": null,
+       "hidden_size": 1024,
+       "layers": 2,
+       "heads": 8,
+       "decoder_type": "transformer",
+       "transformer_ff": 4096,
+       "position_encoding_type": "SinusoidalInterleaved"
+     },
+     "embeddings": {
+       "position_encoding_type": "SinusoidalInterleaved",
+       "tgt_word_vec_size": 1024,
+       "src_word_vec_size": 1024,
+       "word_vec_size": 1024
+     }
+   },
+   "data": {
+     "corpus_1": {
+       "transforms": [
+         "sentencepiece",
+         "filtertoolong"
+       ],
+       "path_src": "train.sv",
+       "path_tgt": "train.en",
+       "path_align": null
+     },
+     "valid": {
+       "transforms": [
+         "sentencepiece",
+         "filtertoolong"
+       ],
+       "path_src": "valid.sv",
+       "path_tgt": "valid.en",
+       "path_align": null
+     }
+   },
+   "transforms_configs": {
+     "sentencepiece": {
+       "src_subword_model": "${MODEL_PATH}/sv.spm.model",
+       "tgt_subword_model": "${MODEL_PATH}/en.spm.model"
+     },
+     "filtertoolong": {
+       "src_seq_length": 256,
+       "tgt_seq_length": 256
+     }
+   }
+ }
eole-model/en.spm.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0eeaccede2c05786b37d496470fbfc1e0509bf61be3e16913981d3a195873bdf
+ size 800835
eole-model/model.00.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cbc8dc4e582896516e3bc64dac2d34b0870e3afa9dfc78321377a1574bc0986e
+ size 840314816
eole-model/sv.spm.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de8b005b6a57ec60d00845c007527d6e6d2bcedaabcc842f15e579b294e5250c
+ size 814642
eole-model/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2b6a7400b4c0ece91190c8ae780f31f208b8d33bf469dfd9dcb06b5323220c10
+ size 409915789
source_vocabulary.json ADDED
The diff for this file is too large to render. See raw diff
 
src.spm.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de8b005b6a57ec60d00845c007527d6e6d2bcedaabcc842f15e579b294e5250c
+ size 814642
target_vocabulary.json ADDED
The diff for this file is too large to render. See raw diff
 
tgt.spm.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0eeaccede2c05786b37d496470fbfc1e0509bf61be3e16913981d3a195873bdf
+ size 800835