TungCan committed
Commit b2da735 · verified · 1 Parent(s): bb6ba31

Upload folder using huggingface_hub

Files changed (6)
  1. README.md +1 -69
  2. optimizer.pt +3 -0
  3. rng_state.pth +3 -0
  4. scheduler.pt +3 -0
  5. trainer_state.json +106 -0
  6. training_args.bin +1 -1
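
The commit message indicates the checkpoint folder was pushed with the `huggingface_hub` client. A minimal sketch of such an upload, assuming a local checkpoint directory and a write token for the repo (the folder path and repo id below are illustrative, not taken from this commit):

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` by default

# Push every file in the local checkpoint folder to the model repo;
# large binaries (optimizer.pt, *.pt, *.pth) are stored through Git LFS.
api.upload_folder(
    folder_path="./vietnamese-correction-lora-v2",    # assumed local path
    repo_id="TungCan/vietnamese-correction-lora-v2",  # assumed repo id
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```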
README.md CHANGED
@@ -1,58 +1,6 @@
  ---
- license: mit
- datasets:
- - bmd1905/vi-error-correction-2.0
- metrics:
- - accuracy
- - bleu
- base_model:
- - vinai/bartpho-syllable
- pipeline_tag: text-generation
  library_name: peft
  ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # vietnamese-correction-lora-v2
-
- This model is a fine-tuned version of [vinai/bartpho-syllable](https://huggingface.co/vinai/bartpho-syllable) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.1776123046875
- - Sacrebleu: 25.07128550273525
- - Precision: 0.9230769230769231
- - Recall: 0.6
- - F1 Score: 0.7272727272727274
- - eval_samples_per_second: 6.776
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- ```html
- DatasetDict({
- train: Dataset({
- features: ['input', 'output'],
- num_rows: 800000
- })
- val: Dataset({
- features: ['input', 'output'],
- num_rows: 200000
- })
- test: Dataset({
- features: ['input', 'output'],
- num_rows: 40000
- })
- })
- ```
-
-
  ## Training procedure
 
 
@@ -66,23 +14,7 @@ The following `bitsandbytes` quantization config was used during training:
  - bnb_4bit_quant_type: nf4
  - bnb_4bit_use_double_quant: True
  - bnb_4bit_compute_dtype: float16
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - trainable params: 25,165,824 || all params: 326,801,408 || trainable%: 7.700647360735974
- - Num examples = 800,000
- - Num Epochs = 2
- - Instantaneous batch size per device = 12
- - Total train batch size (w. parallel, distributed & accumulation) = 72
- - Gradient Accumulation steps = 6
- - Total optimization steps = 22,222
- - Number of trainable parameters = 25,165,824
-
  ### Framework versions
 
+
  - PEFT 0.4.0
- - PEFT 0.14.0
- - Transformers 4.47.0
- - Pytorch 2.5.1+cu121
- - Datasets 3.3.1
- - Tokenizers 0.21.0
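
The trimmed card keeps only the 4-bit `bitsandbytes` settings above (nf4, double quantization, float16 compute) and the PEFT version. A minimal sketch of loading the base model with that quantization config and attaching the LoRA adapter for inference; the adapter repo id and the example input are assumptions, and the author's exact generation settings are not recorded in this commit:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit quantization config matching the values recorded in the card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForSeq2SeqLM.from_pretrained(
    "vinai/bartpho-syllable",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")

# Attach the LoRA adapter from this repo (repo id assumed)
model = PeftModel.from_pretrained(base, "TungCan/vietnamese-correction-lora-v2")

inputs = tokenizer("Toi dang hoc tieng Viet", return_tensors="pt").to(base.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```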
 
 
 
 
 
optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:503a6bbddeb2280904eaed9a9d691f3ef56b6727331d687b35ea2db18d81ef51
+ size 201469562
rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a8f049d0dc7f221e1d8058fd45a0cc9b5e2d694b045a1cd41f34bd549f036c7
+ size 14244
scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9cdf108449d81f49456c5c1d2c15135a982f7e11f00496563cd74e5af9e83359
+ size 1064
trainer_state.json ADDED
@@ -0,0 +1,106 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 3.999727273612748,
+ "global_step": 6844,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.86,
+ "learning_rate": 1.65919930625813e-05,
+ "loss": 1.2243,
+ "step": 1200
+ },
+ {
+ "epoch": 0.86,
+ "eval_f1_score": 0.7272727272727274,
+ "eval_loss": 0.3056640625,
+ "eval_precision": 0.9230769230769231,
+ "eval_recall": 0.6,
+ "eval_runtime": 22188.3103,
+ "eval_sacrebleu": 22.803636183857474,
+ "eval_samples_per_second": 4.507,
+ "eval_steps_per_second": 0.376,
+ "step": 1200
+ },
+ {
+ "epoch": 1.4,
+ "learning_rate": 1.4433110563958261e-05,
+ "loss": 0.4402,
+ "step": 2400
+ },
+ {
+ "epoch": 1.4,
+ "eval_f1_score": 0.8571428571428571,
+ "eval_loss": 0.3037109375,
+ "eval_precision": 0.8571428571428571,
+ "eval_recall": 0.8571428571428571,
+ "eval_runtime": 41845.2709,
+ "eval_sacrebleu": 27.58721404301854,
+ "eval_samples_per_second": 7.169,
+ "eval_steps_per_second": 0.478,
+ "step": 2400
+ },
+ {
+ "epoch": 2.1,
+ "learning_rate": 1.1619181615664206e-05,
+ "loss": 0.3997,
+ "step": 3600
+ },
+ {
+ "epoch": 2.1,
+ "eval_f1_score": 0.8571428571428571,
+ "eval_loss": 0.283447265625,
+ "eval_precision": 0.8571428571428571,
+ "eval_recall": 0.8571428571428571,
+ "eval_runtime": 30810.3722,
+ "eval_sacrebleu": 28.184010528346654,
+ "eval_samples_per_second": 9.737,
+ "eval_steps_per_second": 0.649,
+ "step": 3600
+ },
+ {
+ "epoch": 2.81,
+ "learning_rate": 8.80525266737015e-06,
+ "loss": 0.3927,
+ "step": 4800
+ },
+ {
+ "epoch": 2.81,
+ "eval_f1_score": 0.8571428571428571,
+ "eval_loss": 0.29443359375,
+ "eval_precision": 0.8571428571428571,
+ "eval_recall": 0.8571428571428571,
+ "eval_runtime": 30830.552,
+ "eval_sacrebleu": 28.208588875941434,
+ "eval_samples_per_second": 9.731,
+ "eval_steps_per_second": 0.649,
+ "step": 4800
+ },
+ {
+ "epoch": 3.51,
+ "learning_rate": 5.991323719076094e-06,
+ "loss": 0.3993,
+ "step": 6000
+ },
+ {
+ "epoch": 3.51,
+ "eval_f1_score": 0.8571428571428571,
+ "eval_loss": 0.279052734375,
+ "eval_precision": 0.8571428571428571,
+ "eval_recall": 0.8571428571428571,
+ "eval_runtime": 30959.953,
+ "eval_sacrebleu": 28.577699452894848,
+ "eval_samples_per_second": 9.69,
+ "eval_steps_per_second": 0.646,
+ "step": 6000
+ }
+ ],
+ "max_steps": 8555,
+ "num_train_epochs": 5,
+ "total_flos": 2.97641943465984e+17,
+ "trial_name": null,
+ "trial_params": null
+ }
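
`trainer_state.json` records the `Trainer` log history: a train-loss entry and a full evaluation every 1,200 optimization steps. A small sketch of pulling the evaluation curve out of the uploaded file, assuming it sits in the working directory:

```python
import json

# Load the Trainer state uploaded with the checkpoint
with open("trainer_state.json") as f:
    state = json.load(f)

# Print SacreBLEU and eval loss for each logged evaluation step
for entry in state["log_history"]:
    if "eval_sacrebleu" in entry:
        print(f"step {entry['step']:>5}: "
              f"sacrebleu={entry['eval_sacrebleu']:.2f}, "
              f"eval_loss={entry['eval_loss']:.4f}")
```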
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:58c9075965d741872444ce679419408a906a36a4990cdbfef5a7c249390c824e
+ oid sha256:125adbe784ad4664d78128701bd159c971ff16b9e7f0f220f90bfbee58ff093c
  size 4600