lacos03 committed
Commit 5f12a18 · verified · 1 Parent(s): ad88b2f

Model save

Files changed (2)
  1. README.md +80 -0
  2. generation_config.json +12 -0
README.md ADDED
@@ -0,0 +1,80 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: facebook/bart-base
+ tags:
+ - generated_from_trainer
+ datasets:
+ - xsum
+ metrics:
+ - rouge
+ model-index:
+ - name: bart-base-finetuned-xsum
+   results:
+   - task:
+       name: Sequence-to-sequence Language Modeling
+       type: text2text-generation
+     dataset:
+       name: xsum
+       type: xsum
+       config: default
+       split: validation
+       args: default
+     metrics:
+     - name: Rouge1
+       type: rouge
+       value: 38.6459
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # bart-base-finetuned-xsum
+
+ This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the XSum dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.7558
+ - Rouge1: 38.6459
+ - Rouge2: 17.3528
+ - RougeL: 31.9807
+ - RougeLsum: 31.9765
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5.6e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - num_epochs: 2
+
+ ### Training results
+
+ | Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum |
+ |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
+ | No log        | 1.0   | 12753 | 1.8305          | 37.5583 | 16.2117 | 30.8468 | 30.842    |
+ | No log        | 2.0   | 25506 | 1.7558          | 38.6459 | 17.3528 | 31.9807 | 31.9765   |
+
+
+ ### Framework versions
+
+ - Transformers 4.51.3
+ - Pytorch 2.6.0+cu124
+ - Datasets 3.6.0
+ - Tokenizers 0.21.1
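
For quick sanity checks, a minimal inference sketch with the `transformers` summarization pipeline follows. The hub ID `lacos03/bart-base-finetuned-xsum` is an assumption pieced together from the committer name and the `model-index` name above, and the input text is purely illustrative; adjust both as needed.

```python
# Minimal usage sketch; the hub ID below is an assumption, not confirmed by this commit.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="lacos03/bart-base-finetuned-xsum",  # hypothetical repo path
)

article = (
    "The full cost of damage in Newton Stewart, one of the areas worst affected, "
    "is still being assessed. Repair work is ongoing in Hawick and many roads in "
    "Peeblesshire remain badly affected by standing water."
)

# Decoding defaults (beam search, n-gram blocking) come from the
# generation_config.json added in this same commit.
print(summarizer(article)[0]["summary_text"])
```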
generation_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "early_stopping": true,
+   "eos_token_id": 2,
+   "forced_bos_token_id": 0,
+   "forced_eos_token_id": 2,
+   "no_repeat_ngram_size": 3,
+   "num_beams": 4,
+   "pad_token_id": 1,
+   "transformers_version": "4.51.3"
+ }
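
Because these values live in `generation_config.json`, `model.generate()` picks them up automatically, so no decoding arguments are needed at call time. A sketch of that equivalence, reusing the assumed hub ID from the example above:

```python
# Sketch: defaults from generation_config.json vs. passing them explicitly.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "lacos03/bart-base-finetuned-xsum"  # hypothetical repo path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Some long news article ...", return_tensors="pt", truncation=True)

# Implicitly uses num_beams=4, early_stopping=True, no_repeat_ngram_size=3, etc.
default_ids = model.generate(**inputs)

# The explicit equivalent of the committed defaults:
explicit_ids = model.generate(
    **inputs,
    num_beams=4,
    early_stopping=True,
    no_repeat_ngram_size=3,
    forced_bos_token_id=0,
    forced_eos_token_id=2,
)

print(tokenizer.decode(default_ids[0], skip_special_tokens=True))
```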