HanSSH committed
Commit 59083ec · Parent: 81b1c6b

Training in progress epoch 0
.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,60 @@
+ ---
+ license: apache-2.0
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: HanSSH/mt5-small-finetuned-amazon-en-es
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # HanSSH/mt5-small-finetuned-amazon-en-es
+
+ This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Train Loss: 1.7231
+ - Validation Loss: 1.5811
+ - Epoch: 7
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+ - training_precision: mixed_float16
+
+ ### Training results
+
+ | Train Loss | Validation Loss | Epoch |
+ |:----------:|:---------------:|:-----:|
+ | 12.6850 | 2.6962 | 0 |
+ | 2.9141 | 2.3047 | 1 |
+ | 2.3124 | 2.1101 | 2 |
+ | 2.0972 | 1.9355 | 3 |
+ | 1.9639 | 1.8009 | 4 |
+ | 1.8591 | 1.7010 | 5 |
+ | 1.7735 | 1.6233 | 6 |
+ | 1.7231 | 1.5811 | 7 |
+
+
+ ### Framework versions
+
+ - Transformers 4.21.3
+ - TensorFlow 2.10.0
+ - Datasets 2.4.0
+ - Tokenizers 0.12.1
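The `PolynomialDecay` schedule in the hyperparameters above (initial learning rate 5.6e-5, 9672 decay steps, power 1.0, `cycle=False`) is simply a linear ramp from the initial rate down to zero. A minimal sketch of the formula Keras applies, using the values from this card:

```python
def polynomial_decay(step, initial_lr=5.6e-5, end_lr=0.0,
                     decay_steps=9672, power=1.0):
    """Learning rate at a given step under a Keras-style PolynomialDecay
    schedule with cycle=False (the step is clamped to the decay horizon)."""
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

# With power=1.0 this is a straight line: 5.6e-5 at step 0,
# half that at the midpoint, and 0.0 from step 9672 onward.
```

Note that 9672 here corresponds to the full 8-epoch run; the later commit in this diff retrains with `decay_steps: 2418`, so the ramp there covers a shorter schedule.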
.ipynb_checkpoints/config-checkpoint.json ADDED
@@ -0,0 +1,30 @@
+ {
+ "_name_or_path": "google/mt5-small",
+ "architectures": [
+ "MT5ForConditionalGeneration"
+ ],
+ "d_ff": 1024,
+ "d_kv": 64,
+ "d_model": 512,
+ "decoder_start_token_id": 0,
+ "dense_act_fn": "gelu_new",
+ "dropout_rate": 0.1,
+ "eos_token_id": 1,
+ "feed_forward_proj": "gated-gelu",
+ "initializer_factor": 1.0,
+ "is_encoder_decoder": true,
+ "is_gated_act": true,
+ "layer_norm_epsilon": 1e-06,
+ "model_type": "mt5",
+ "num_decoder_layers": 8,
+ "num_heads": 6,
+ "num_layers": 8,
+ "pad_token_id": 0,
+ "relative_attention_max_distance": 128,
+ "relative_attention_num_buckets": 32,
+ "tie_word_embeddings": false,
+ "tokenizer_class": "T5Tokenizer",
+ "transformers_version": "4.21.3",
+ "use_cache": true,
+ "vocab_size": 250112
+ }
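One detail worth noting in the config above: in T5-family models the attention layers project the `d_model`-sized hidden state into an inner dimension of `num_heads * d_kv`, and mT5-small does not tie that back to `d_model` (6 × 64 = 384 vs. `d_model` = 512). A small sketch deriving this; the dict below just restates fields from the JSON:

```python
# Fields copied from the config-checkpoint.json above
config = {
    "d_model": 512,
    "d_kv": 64,
    "num_heads": 6,
    "num_layers": 8,
    "num_decoder_layers": 8,
    "vocab_size": 250112,
}

# T5/mT5 attention projects d_model -> num_heads * d_kv per Q/K/V,
# so the attention inner width is independent of the model width.
attention_inner_dim = config["num_heads"] * config["d_kv"]
print(attention_inner_dim)                        # 384
print(attention_inner_dim == config["d_model"])   # False
```

This is one reason parameter counts for T5-family checkpoints cannot be read off `d_model` alone.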
README.md CHANGED
@@ -14,9 +14,9 @@ probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 1.7231
- - Validation Loss: 1.5811
- - Epoch: 7
+ - Train Loss: 10.8170
+ - Validation Loss: 4.9147
+ - Epoch: 0
 
  ## Model description
 
@@ -35,21 +35,14 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+ - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 2418, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
  - training_precision: mixed_float16
 
  ### Training results
 
  | Train Loss | Validation Loss | Epoch |
  |:----------:|:---------------:|:-----:|
- | 12.6850 | 2.6962 | 0 |
- | 2.9141 | 2.3047 | 1 |
- | 2.3124 | 2.1101 | 2 |
- | 2.0972 | 1.9355 | 3 |
- | 1.9639 | 1.8009 | 4 |
- | 1.8591 | 1.7010 | 5 |
- | 1.7735 | 1.6233 | 6 |
- | 1.7231 | 1.5811 | 7 |
+ | 10.8170 | 4.9147 | 0 |
 
 
  ### Framework versions
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:eb0203d4b6a6bf5e8b4d4099d50984df8c4f6e4d8f0bb4b0af31133d23aa35d4
+ oid sha256:02662779d35424ec4df16e77d8898f4cd3651cff1010b7caf8679ede6b9b67dc
  size 1201094528
tokenizer.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b5baf8c1abc460afe5e7ca79cf84f74686db46217f874a8fb735f4981e7758ed
- size 16330621
+ oid sha256:cd38144ed4e51886247b55a7fd9b6fcca7afd7355345b522149ba074b455f2cb
+ size 16330466