KalvinPhan committed
Commit 15b1db9 · verified · 1 Parent(s): 5dbb1d7

End of training

Files changed (3)
  1. README.md +91 -0
  2. generation_config.json +6 -0
  3. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,91 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: google-t5/t5-base
+ tags:
+ - generated_from_trainer
+ metrics:
+ - rouge
+ model-index:
+ - name: t5-base-cnn
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # t5-base-cnn
+
+ This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: nan
+ - Rouge1: 0.0
+ - Rouge2: 0.0
+ - Rougel: 0.0
+ - Rougelsum: 0.0
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
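One limitation worth stating even in this stub: T5-family checkpoints are trained with a task prefix, so summarization inputs should be prepended with `summarize: ` before tokenization. A minimal sketch of that preprocessing step in plain Python (the helper name is hypothetical, not part of this repo):

```python
def build_t5_input(article: str, prefix: str = "summarize: ") -> str:
    """Prepend the task prefix that T5 summarization checkpoints expect."""
    return prefix + article.strip()

print(build_t5_input("The quick brown fox jumps over the lazy dog."))
# -> summarize: The quick brown fox jumps over the lazy dog.
```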
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 2
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 16
+ - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 2
+ - mixed_precision_training: Native AMP
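The `total_train_batch_size` above is derived rather than set directly: it is the per-device batch size multiplied by the gradient accumulation steps (and by the device count, assumed to be one for this run). A quick sanity check:

```python
train_batch_size = 2             # per-device batch size from the list above
gradient_accumulation_steps = 8
num_devices = 1                  # assumption: single-GPU run

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)
# -> 16
```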
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
+ |:-------------:|:------:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
+ | 1.4159 | 0.0804 | 500 | 1.2113 | 26.8618 | 13.7400 | 22.6242 | 22.5895 |
+ | 1.3831 | 0.1608 | 1000 | 1.1629 | 27.1682 | 14.2922 | 22.9340 | 22.9407 |
+ | 1.4429 | 0.2412 | 1500 | 1.1740 | 26.3174 | 13.3691 | 22.1695 | 22.1162 |
+ | 1.449 | 0.3216 | 2000 | 1.1724 | 26.2449 | 13.2833 | 22.0983 | 22.0470 |
+ | 1.4263 | 0.4020 | 2500 | 1.1725 | 26.2259 | 13.2458 | 22.0971 | 22.0603 |
+ | 1.4482 | 0.4824 | 3000 | 1.1725 | 26.2291 | 13.2776 | 22.1100 | 22.0645 |
+ | 1.4241 | 0.5628 | 3500 | 1.1725 | 26.2545 | 13.2396 | 22.1039 | 22.0575 |
+ | 1.4162 | 0.6432 | 4000 | 1.1725 | 26.2714 | 13.2853 | 22.1341 | 22.1005 |
+ | 1.4222 | 0.7236 | 4500 | 1.1725 | 26.2655 | 13.2861 | 22.1245 | 22.0911 |
+ | 1.3981 | 0.8040 | 5000 | 1.1725 | 26.2334 | 13.2644 | 22.1037 | 22.0614 |
+ | 1.4436 | 0.8844 | 5500 | 1.1726 | 26.2563 | 13.2686 | 22.1135 | 22.0862 |
+ | 1.4375 | 0.9648 | 6000 | 1.1725 | 26.2445 | 13.2861 | 22.1145 | 22.0876 |
+ | 1.4218 | 1.0452 | 6500 | 1.1726 | 26.2385 | 13.2984 | 22.1388 | 22.1080 |
+ | 1.4205 | 1.1256 | 7000 | 1.1725 | 26.2472 | 13.2703 | 22.1307 | 22.0987 |
+ | 1.4015 | 1.2060 | 7500 | 1.1725 | 26.1918 | 13.2530 | 22.0918 | 22.0415 |
+ | 1.4307 | 1.2864 | 8000 | 1.1725 | 26.2286 | 13.2684 | 22.1133 | 22.0767 |
+ | 1.4436 | 1.3668 | 8500 | 1.1725 | 26.1782 | 13.2320 | 22.0806 | 22.0392 |
+ | 1.4147 | 1.4472 | 9000 | 1.1725 | 26.2300 | 13.2703 | 22.1014 | 22.0480 |
+ | 1.4059 | 1.5276 | 9500 | 1.1725 | 26.2352 | 13.2800 | 22.1248 | 22.0848 |
+ | 1.3891 | 1.6080 | 10000 | 1.1725 | 26.2352 | 13.2800 | 22.1248 | 22.0848 |
+ | 0.0 | 1.6884 | 10500 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 0.0 | 1.7688 | 11000 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 0.0 | 1.8492 | 11500 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 0.0 | 1.9296 | 12000 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
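The table shows the run collapsing at epoch ~1.69: the training loss drops to 0.0 and the validation loss becomes `nan`, which then zeroes every ROUGE score, including the headline metrics at the top of the card. A minimal sketch of a post-hoc check that locates the first bad step from a logged history (a hypothetical helper, not part of the Trainer API):

```python
import math

def first_nan_step(history):
    """Return the step of the first NaN validation loss, else None."""
    for step, loss in history:
        if math.isnan(loss):
            return step
    return None

# (step, validation loss) pairs abridged from the table above.
logged = [(9500, 1.1725), (10000, 1.1725), (10500, float("nan")), (11000, float("nan"))]
print(first_nan_step(logged))
# -> 10500
```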
+
+
+ ### Framework versions
+
+ - Transformers 4.52.4
+ - Pytorch 2.6.0+cu124
+ - Datasets 3.6.0
+ - Tokenizers 0.21.2
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "pad_token_id": 0,
+   "transformers_version": "4.52.4"
+ }
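These defaults follow the T5 convention in which decoding starts from the pad token, so `decoder_start_token_id` and `pad_token_id` share id 0 while `eos_token_id` is 1. A quick consistency check over the same values in plain Python:

```python
import json

# The generation config added above, verbatim.
generation_config = json.loads("""
{
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.52.4"
}
""")

# For T5, the decoder start token is the pad token.
assert generation_config["decoder_start_token_id"] == generation_config["pad_token_id"]
print(generation_config["eos_token_id"])
# -> 1
```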
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:690991e105717c10744cc2b5b3df47fe79f73fb2944c410732dfca4000cc0625
+ oid sha256:16674d98527a3b23b3fc283aeda280e1f547d7e50ff3ac55aa68685ef478ea5f
  size 891644712