FiveC committed
Commit ca37705 · verified · 1 Parent(s): bcb1672

End of training

Files changed (4)
  1. README.md +64 -0
  2. dict.txt +0 -0
  3. sentencepiece.bpe.model +3 -0
  4. tokenizer_config.json +59 -0
README.md ADDED
@@ -0,0 +1,64 @@
+ ---
+ library_name: transformers
+ license: mit
+ base_model: IAmSkyDra/BARTBana_v5
+ tags:
+ - generated_from_trainer
+ metrics:
+ - sacrebleu
+ model-index:
+ - name: VieBahnar
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # VieBahnar
+
+ This model is a fine-tuned version of [IAmSkyDra/BARTBana_v5](https://huggingface.co/IAmSkyDra/BARTBana_v5) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 3.8130
+ - Sacrebleu: 0.6372
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 64
+ - eval_batch_size: 64
+ - seed: 42
+ - optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - num_epochs: 3
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Sacrebleu |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|
+ | 2.5015 | 1.0 | 812 | 4.0753 | 0.4328 |
+ | 2.2409 | 2.0 | 1624 | 3.8791 | 0.5922 |
+ | 2.1494 | 3.0 | 2436 | 3.8130 | 0.6372 |
+
+
+ ### Framework versions
+
+ - Transformers 5.0.0
+ - Pytorch 2.10.0+cu128
+ - Datasets 4.0.0
+ - Tokenizers 0.22.2
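
The hyperparameter list in the card is the Trainer's auto-generated summary. As a rough illustration only, the sketch below shows how those values would typically be expressed with the `transformers` Seq2Seq training API; the training/evaluation datasets, preprocessing, and metric function are not documented in this card and are left as placeholders, and per-epoch evaluation is an assumption inferred from the results table.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base_model = "IAmSkyDra/BARTBana_v5"  # base checkpoint named in the card
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSeq2SeqLM.from_pretrained(base_model)

args = Seq2SeqTrainingArguments(
    output_dir="VieBahnar",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch_fused",   # OptimizerNames.ADAMW_TORCH_FUSED; betas/epsilon listed above are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                   # "Native AMP" mixed precision
    eval_strategy="epoch",       # assumption: matches the per-epoch rows in the results table
    predict_with_generate=True,  # needed to score generated text with sacreBLEU
)

# The datasets and compute_metrics function are not part of this card:
# trainer = Seq2SeqTrainer(model=model, args=args,
#                          train_dataset=..., eval_dataset=..., compute_metrics=...)
# trainer.train()
```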
dict.txt ADDED
The diff for this file is too large to render. See raw diff
 
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
+ size 5069051
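
This file is stored with Git LFS, so the diff shows only a pointer in the git-lfs spec-v1 format: `oid` is the SHA-256 of the real file and `size` is its length in bytes. A small sanity-check sketch, assuming the real file has already been fetched (e.g. via `git lfs pull` or the huggingface_hub client):

```python
import hashlib
from pathlib import Path

# Read the resolved LFS file, not the three-line pointer shown in the diff.
data = Path("sentencepiece.bpe.model").read_bytes()

# Values taken from the pointer above.
assert len(data) == 5069051
assert hashlib.sha256(data).hexdigest() == (
    "cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865"
)
```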
tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "40029": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": null,
+   "backend": "sentencepiece",
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "is_local": false,
+   "mask_token": "<mask>",
+   "model_max_length": 1000000000000000019884624838656,
+   "model_specific_special_tokens": {},
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "sp_model_kwargs": {},
+   "tokenizer_class": "BartphoTokenizer",
+   "unk_token": "<unk>"
+ }
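
Once these files are on the Hub, `AutoTokenizer` resolves `tokenizer_class` to `BartphoTokenizer`, which combines the SentencePiece BPE model added above with `dict.txt`. A minimal loading sketch follows; the repo id `FiveC/VieBahnar` is an assumption based on the committer and model name, so substitute the actual repository.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("FiveC/VieBahnar")  # hypothetical repo id

# Special tokens as declared in tokenizer_config.json above.
print(tok.bos_token, tok.eos_token, tok.pad_token, tok.unk_token)  # <s> </s> <pad> <unk>
print(tok.mask_token, tok.convert_tokens_to_ids(tok.mask_token))   # <mask> 40029 per the config

# Round-trip a sentence through the SentencePiece BPE vocabulary.
ids = tok("Xin chào")["input_ids"]
print(tok.decode(ids, skip_special_tokens=True))
```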