MGZON committed
Commit 92b668c · verified · 1 Parent(s): 2a4ba63

End of training

Files changed (3)
  1. README.md +12 -9
  2. tokenizer.json +2 -2
  3. tokenizer_config.json +7 -0
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 license: apache-2.0
-base_model: google/flan-t5-base
+base_model: MGZON/mgzon-flan-t5-base
 tags:
 - generated_from_trainer
 model-index:
@@ -14,9 +14,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # mgzon-flan-t5-base
 
-This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
+This model is a fine-tuned version of [MGZON/mgzon-flan-t5-base](https://huggingface.co/MGZON/mgzon-flan-t5-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.1631
+- Loss: nan
 
 ## Model description
 
@@ -35,21 +35,24 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 4
-- eval_batch_size: 4
+- learning_rate: 3e-05
+- train_batch_size: 2
+- eval_batch_size: 2
 - seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 4
 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 3
+- mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| No log | 1.0 | 45 | 2.8346 |
-| No log | 2.0 | 90 | 1.3637 |
-| No log | 3.0 | 135 | 1.1631 |
+| 0.0 | 1.0 | 744 | nan |
+| 0.0 | 2.0 | 1488 | nan |
+| 0.0 | 3.0 | 2232 | nan |
 
 
 ### Framework versions
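For orientation, the updated hyperparameters map onto `transformers` `Seq2SeqTrainingArguments` roughly as follows. This is a minimal sketch assuming the card was generated by the standard `Trainer`; the output directory is a placeholder, and `fp16=True` is an assumption for the "Native AMP" setting (it could equally have been `bf16`).

```python
# Sketch: the updated hyperparameters expressed as Seq2SeqTrainingArguments.
# output_dir is a placeholder; fp16 is an assumption for "Native AMP".
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mgzon-flan-t5-base",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,    # total_train_batch_size: 2 * 2 = 4
    num_train_epochs=3,
    lr_scheduler_type="linear",
    optim="adamw_torch_fused",        # OptimizerNames.ADAMW_TORCH_FUSED
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    fp16=True,                        # mixed_precision_training: Native AMP
)
```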
tokenizer.json CHANGED
@@ -2,13 +2,13 @@
   "version": "1.0",
   "truncation": {
     "direction": "Right",
-    "max_length": 128,
+    "max_length": 256,
     "strategy": "LongestFirst",
     "stride": 0
   },
   "padding": {
     "strategy": {
-      "Fixed": 128
+      "Fixed": 256
     },
     "direction": "Right",
     "pad_to_multiple_of": null,
tokenizer_config.json CHANGED
@@ -932,9 +932,16 @@
   "eos_token": "</s>",
   "extra_ids": 100,
   "extra_special_tokens": {},
+  "max_length": 128,
   "model_max_length": 512,
+  "pad_to_multiple_of": null,
   "pad_token": "<pad>",
+  "pad_token_type_id": 0,
+  "padding_side": "right",
   "sp_model_kwargs": {},
+  "stride": 0,
   "tokenizer_class": "T5Tokenizer",
+  "truncation_side": "right",
+  "truncation_strategy": "longest_first",
   "unk_token": "<unk>"
 }
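These added keys are persisted tokenizer kwargs; after loading the repo they surface as attributes and saved defaults on the slow `T5Tokenizer`. A minimal sketch of inspecting them (note that this `max_length` of 128 is serialized separately from the fixed 256-length padding in `tokenizer.json`, which only the fast tokenizer backend reads):

```python
# Sketch: inspecting the kwargs persisted in tokenizer_config.json.
from transformers import T5Tokenizer  # requires sentencepiece

tok = T5Tokenizer.from_pretrained("MGZON/mgzon-flan-t5-base")
print(tok.model_max_length)               # 512
print(tok.padding_side)                   # "right"
print(tok.truncation_side)                # "right"
print(tok.init_kwargs.get("max_length"))  # 128
```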