NetherQuartz committed
Commit 4a53194 · verified · 1 Parent(s): 5777d2f

Update README.md

Files changed (1): README.md (+82 -76)
README.md CHANGED
---
library_name: transformers
license: cc-by-4.0
base_model: Helsinki-NLP/opus-mt-ru-en
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: tatoeba-ru-tok
  results: []
language:
- ru
- tok
datasets:
- NetherQuartz/tatoeba-tokipona
---

# tatoeba-ru-tok

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-en](https://huggingface.co/Helsinki-NLP/opus-mt-ru-en) on the [NetherQuartz/tatoeba-tokipona](https://huggingface.co/datasets/NetherQuartz/tatoeba-tokipona) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5932
- BLEU: 47.6666

## Model description

A Marian-architecture machine translation model that translates from Russian (`ru`) to Toki Pona (`tok`), obtained by fine-tuning the OPUS-MT Russian–English model on Russian–Toki Pona sentence pairs from Tatoeba.
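
A minimal usage sketch with the `transformers` translation pipeline; the repository id `NetherQuartz/tatoeba-ru-tok` is assumed from the commit author and the model name above:

```python
from transformers import pipeline

# Repository id assumed from the commit author and model name.
translator = pipeline("translation", model="NetherQuartz/tatoeba-ru-tok")

# "Hello, world!" in Russian -> Toki Pona
print(translator("Привет, мир!")[0]["translation_text"])
```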

## Intended uses & limitations

Intended for translating Russian text into Toki Pona. Since the training data consists of Tatoeba sentence pairs, the model is likely to work best on short, everyday sentences and may degrade on long, technical, or domain-specific text.

## Training and evaluation data

The model was trained and evaluated on Russian–Toki Pona sentence pairs from the [NetherQuartz/tatoeba-tokipona](https://huggingface.co/datasets/NetherQuartz/tatoeba-tokipona) dataset (see the loading sketch below).
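
A sketch of inspecting the dataset with the `datasets` library; the split and column layout is an assumption, not verified against the actual repository:

```python
from datasets import load_dataset

# Loads the dataset referenced in the card's metadata;
# printing the DatasetDict reveals the actual splits and columns.
ds = load_dataset("NetherQuartz/tatoeba-tokipona")
print(ds)
```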

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
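
A minimal sketch, assuming a standard `Seq2SeqTrainer` setup; `output_dir` and the per-epoch evaluation strategy are assumptions (the latter matches the per-epoch results table below):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="tatoeba-ru-tok",     # placeholder, not from the original run
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",             # AdamW, default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,                       # native AMP mixed precision
    eval_strategy="epoch",
    predict_with_generate=True,      # generate during eval so BLEU can be computed
)
```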

### Training results

Per-epoch validation loss and BLEU on the evaluation set (a sketch of the BLEU computation follows the table):

| Training Loss | Epoch | Step  | Validation Loss | BLEU    |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.0515        | 1.0   | 1167  | 0.8539          | 37.3803 |
| 0.8186        | 2.0   | 2334  | 0.7284          | 41.5032 |
| 0.7002        | 3.0   | 3501  | 0.6803          | 43.5555 |
| 0.6501        | 4.0   | 4668  | 0.6485          | 45.0023 |
| 0.6091        | 5.0   | 5835  | 0.6302          | 45.6329 |
| 0.5778        | 6.0   | 7002  | 0.6180          | 45.8879 |
| 0.553         | 7.0   | 8169  | 0.6109          | 46.6945 |
| 0.533         | 8.0   | 9336  | 0.6041          | 46.6169 |
| 0.5128        | 9.0   | 10503 | 0.6002          | 47.0549 |
| 0.5015        | 10.0  | 11670 | 0.5961          | 47.2017 |
| 0.4851        | 11.0  | 12837 | 0.5962          | 47.5851 |
| 0.4795        | 12.0  | 14004 | 0.5939          | 47.5400 |
| 0.4659        | 13.0  | 15171 | 0.5932          | 47.6666 |
| 0.4608        | 14.0  | 16338 | 0.5939          | 47.6703 |
| 0.4593        | 15.0  | 17505 | 0.5936          | 47.6572 |
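
A sketch of computing BLEU with the `evaluate` library (sacrebleu backend); the original run's exact metric configuration is not recorded in the card, so this setup is an assumption:

```python
import evaluate

bleu = evaluate.load("sacrebleu")

# Illustrative placeholders: model outputs and one reference list per prediction.
predictions = ["toki, ma ale o!"]
references = [["toki, ma ale o!"]]

print(bleu.compute(predictions=predictions, references=references)["score"])
```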

### Framework versions

- Transformers 4.52.4
- PyTorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1