Commit a12ff5e by emilstabil (1 parent: 7d01b29)

update model card README.md

Files changed (1): README.md (+75 -0, new file)

---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: DanSumT5-smallV_55565
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# DanSumT5-smallV_55565

This model is a fine-tuned version of [Danish-summarisation/DanSumT5-small](https://huggingface.co/Danish-summarisation/DanSumT5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6152
- Rouge1: 33.2076
- Rouge2: 9.7687
- Rougel: 19.2885
- Rougelsum: 30.5358
- Gen Len: 125.4515
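
As a minimal inference sketch only: the hub id `emilstabil/DanSumT5-smallV_55565` below is an assumption inferred from this card's title (not confirmed by the card), and the generation settings are illustrative, chosen to roughly match the reported average generation length.

```python
# Minimal inference sketch; the repo id is a hypothetical guess from the card title.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "emilstabil/DanSumT5-smallV_55565"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Indsæt den danske artikel, der skal opsummeres, her."  # Danish input text
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
# max_length=128 roughly matches the reported average generation length (~125 tokens)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```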

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 80
- eval_batch_size: 80
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 320
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
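
The list above maps onto `Seq2SeqTrainingArguments` roughly as follows. This is a sketch under assumptions: `output_dir` is illustrative, the trainer wiring (dataset, data collator, `compute_metrics`) is omitted, and the batch sizes are read as per-device values (80 × 4 accumulation steps = the listed total of 320).

```python
# Hedged sketch mapping the hyperparameters above onto Seq2SeqTrainingArguments.
# output_dir is illustrative; dataset, collator, and compute_metrics are omitted.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="DanSumT5-smallV_55565",  # assumed output directory
    learning_rate=5e-05,
    per_device_train_batch_size=80,
    per_device_eval_batch_size=80,
    gradient_accumulation_steps=4,       # 80 * 4 = total_train_batch_size of 320
    num_train_epochs=11,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    predict_with_generate=True,          # needed to compute ROUGE / Gen Len at eval time
)
```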

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log        | 1.0   | 6    | 2.7915          | 28.1925 | 6.0798  | 16.0171 | 25.4406   | 117.8734 |
| No log        | 2.0   | 12   | 2.7309          | 30.7001 | 7.8441  | 17.6277 | 28.1658   | 123.7384 |
| No log        | 3.0   | 18   | 2.6932          | 31.9139 | 8.8623  | 18.4491 | 29.2043   | 125.2321 |
| No log        | 4.0   | 24   | 2.6673          | 32.1541 | 9.2757  | 18.7349 | 29.3827   | 125.1941 |
| No log        | 5.0   | 30   | 2.6506          | 32.6317 | 9.6369  | 18.9798 | 30.0012   | 125.6034 |
| No log        | 6.0   | 36   | 2.6391          | 32.7076 | 9.7264  | 18.9488 | 29.9797   | 125.3376 |
| No log        | 7.0   | 42   | 2.6307          | 32.9958 | 9.8324  | 19.0395 | 30.2766   | 125.0    |
| No log        | 8.0   | 48   | 2.6241          | 33.2035 | 9.9866  | 19.1625 | 30.5136   | 125.2321 |
| No log        | 9.0   | 54   | 2.6190          | 33.4626 | 10.076  | 19.2999 | 30.6955   | 125.4515 |
| No log        | 10.0  | 60   | 2.6161          | 33.3145 | 9.9106  | 19.3186 | 30.6521   | 125.4515 |
| No log        | 11.0  | 66   | 2.6152          | 33.2076 | 9.7687  | 19.2885 | 30.5358   | 125.4515 |

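The ROUGE columns in the table can be reproduced along the following lines with the `evaluate` library. The prediction/reference lists are placeholders, and the scores are scaled by 100 on the assumption that the table reports percentages, as autogenerated Trainer cards typically do.

```python
# Sketch of the ROUGE computation behind the table above; predictions and
# references are placeholders for model outputs and gold summaries.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["genereret resumé ..."]  # model-generated summaries
references = ["reference-resumé ..."]   # gold summaries
scores = rouge.compute(predictions=predictions, references=references)
# evaluate returns fractions in [0, 1]; the table appears to report them scaled by 100
print({name: round(value * 100, 4) for name, value in scores.items()})
```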
70
+ ### Framework versions
71
+
72
+ - Transformers 4.30.2
73
+ - Pytorch 1.12.1+git7548e2f
74
+ - Datasets 2.13.2
75
+ - Tokenizers 0.13.3