Seynro committed
Commit 01526dd · verified · 1 Parent(s): 0c8c663

Model save

Files changed (3)
  1. README.md +5 -12
  2. generation_config.json +3 -3
  3. model.safetensors +1 -1
README.md CHANGED
@@ -1,7 +1,7 @@
  ---
  library_name: transformers
  license: apache-2.0
- base_model: google/flan-t5-base
+ base_model: allmalab/gpt2-aze
  tags:
  - generated_from_trainer
  model-index:
@@ -14,10 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  # aztelecom-complaint-resolver
 
- This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.6030
- - Generation Quality: 0.0
+ This model is a fine-tuned version of [allmalab/gpt2-aze](https://huggingface.co/allmalab/gpt2-aze) on an unknown dataset.
 
  ## Model description
 
@@ -37,11 +34,11 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 3e-05
- - train_batch_size: 8
- - eval_batch_size: 8
+ - train_batch_size: 4
+ - eval_batch_size: 2
  - seed: 42
  - gradient_accumulation_steps: 4
- - total_train_batch_size: 32
+ - total_train_batch_size: 16
  - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.1
@@ -49,10 +46,6 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Generation Quality |
- |:-------------:|:------:|:----:|:---------------:|:------------------:|
- | 0.8492 | 4.6729 | 500 | 0.7531 | 1.4626 |
- | 0.6632 | 9.3458 | 1000 | 0.6030 | 0.0 |
 
 
  ### Framework versions
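
Note on the updated hyperparameters: the effective batch size is the per-device batch times the gradient accumulation steps, i.e. 4 × 4 = 16, which matches the new total_train_batch_size. Below is a minimal sketch of how these values might map onto a `transformers` training setup; the output directory and dataset wiring are assumptions for illustration, not part of this commit.

```python
# Hypothetical reconstruction of the training setup implied by the README
# hyperparameters; output_dir and the dataset wiring are assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

model_name = "allmalab/gpt2-aze"  # new base_model introduced by this commit
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

args = TrainingArguments(
    output_dir="aztelecom-complaint-resolver",  # assumed output directory
    learning_rate=3e-5,
    per_device_train_batch_size=4,   # train_batch_size: 4
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    gradient_accumulation_steps=4,   # effective batch: 4 * 4 = 16
    seed=42,
    optim="adamw_torch",             # betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)
# A Trainer(model=model, args=args, train_dataset=..., eval_dataset=...) would
# consume these arguments; the datasets are omitted here.
```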
generation_config.json CHANGED
@@ -1,6 +1,6 @@
  {
- "decoder_start_token_id": 0,
- "eos_token_id": 1,
- "pad_token_id": 0,
+ "_from_model_config": true,
+ "bos_token_id": 11,
+ "eos_token_id": 12,
  "transformers_version": "4.49.0"
  }
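
This change is consistent with the base-model swap above: the T5-style encoder-decoder fields (decoder_start_token_id, pad_token_id) are replaced by decoder-only GPT-2 fields (bos_token_id, eos_token_id). A minimal sketch of inspecting the saved config with `transformers` follows; the hub repo id is an assumption inferred from the committer and model name.

```python
from transformers import GenerationConfig

# Repo id assumed from the committer and model name; adjust to the real path.
gen_config = GenerationConfig.from_pretrained("Seynro/aztelecom-complaint-resolver")

print(gen_config.bos_token_id)  # 11, per the updated generation_config.json
print(gen_config.eos_token_id)  # 12
```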
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cb2357db4d5dbc8b50b73aee33258be5078a83ac5942fbb18c3611064b0ba01e
+ oid sha256:7f6a436412a1ae2b1d80130315db6a394d6c73e0daf8a271c69f56ce618b2250
  size 540001920
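
Only the sha256 oid of the LFS pointer changes; the declared size stays 540001920 bytes. A sketch of verifying a downloaded model.safetensors against the new oid is below; the use of huggingface_hub and the repo id are assumptions, not part of this commit.

```python
import hashlib

from huggingface_hub import hf_hub_download

# Repo id assumed from the committer and model name; adjust to the real path.
path = hf_hub_download("Seynro/aztelecom-complaint-resolver", "model.safetensors")

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        digest.update(chunk)

# The digest should equal the oid in the new LFS pointer.
expected = "7f6a436412a1ae2b1d80130315db6a394d6c73e0daf8a271c69f56ce618b2250"
assert digest.hexdigest() == expected
```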