andakm committed on
Commit fe0120a · 1 Parent(s): 1df1db4

End of training
Files changed (3):
  1. README.md +16 -15
  2. config.json +1 -1
  3. tf_model.h5 +1 -1
README.md CHANGED
@@ -15,7 +15,8 @@ probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 0.1769
+- Train Loss: 0.4037
+- Train Accuracy: 0.8039
 - Epoch: 9
 
 ## Model description
@@ -35,28 +36,28 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2550, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4080, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
 - training_precision: float32
 
 ### Training results
 
-| Train Loss | Epoch |
-|:----------:|:-----:|
-| 0.4562     | 0     |
-| 0.3993     | 1     |
-| 0.3517     | 2     |
-| 0.3186     | 3     |
-| 0.2959     | 4     |
-| 0.2589     | 5     |
-| 0.2289     | 6     |
-| 0.2065     | 7     |
-| 0.1904     | 8     |
-| 0.1769     | 9     |
+| Train Loss | Train Accuracy | Epoch |
+|:----------:|:--------------:|:-----:|
+| 2.0951     | 0.3039         | 0     |
+| 1.8395     | 0.4902         | 1     |
+| 1.6157     | 0.4902         | 2     |
+| 1.3806     | 0.5686         | 3     |
+| 1.1369     | 0.6569         | 4     |
+| 0.9272     | 0.6667         | 5     |
+| 0.7538     | 0.7157         | 6     |
+| 0.6106     | 0.7451         | 7     |
+| 0.4929     | 0.7843         | 8     |
+| 0.4037     | 0.8039         | 9     |
 
 
 ### Framework versions
 
-- Transformers 4.36.1
+- Transformers 4.36.2
 - TensorFlow 2.15.0
 - Datasets 2.15.0
 - Tokenizers 0.15.0
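
The `PolynomialDecay` entry in the optimizer dict above fully determines the learning rate at each training step. As a dependency-free sketch (reimplementing the formula Keras documents for `keras.optimizers.schedules.PolynomialDecay` in plain Python, rather than calling the class itself), the new config with `power=1.0` and `cycle=False` is simply a linear ramp from 3e-05 down to 0 over 4080 steps:

```python
def polynomial_decay_lr(step, initial_lr=3e-05, decay_steps=4080,
                        end_lr=0.0, power=1.0):
    """Learning rate at `step` for a non-cycling PolynomialDecay schedule.

    Mirrors the formula Keras documents:
        lr = (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr
    with `step` clamped to `decay_steps`.
    """
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

# With power=1.0 this is a straight line: full LR at step 0,
# half the LR midway through training, zero at the final step.
print(polynomial_decay_lr(0))      # 3e-05
print(polynomial_decay_lr(2040))   # 1.5e-05
print(polynomial_decay_lr(4080))   # 0.0
```

Note the diff also changed `decay_steps` from 2550 to 4080, which is consistent with the same 10-epoch run seeing more optimizer steps (e.g. a larger dataset or smaller batch size) in the retrained version.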
config.json CHANGED
@@ -42,5 +42,5 @@
   "num_hidden_layers": 12,
   "patch_size": 16,
   "qkv_bias": true,
-  "transformers_version": "4.36.1"
+  "transformers_version": "4.36.2"
 }
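
The only change to `config.json` is the `transformers_version` stamp, which records the library version that wrote the checkpoint. A minimal sketch of inspecting it with the standard library (the fragment below reassembles just the keys visible in the diff; the real file has many more):

```python
import json

# Hypothetical reassembly of the config.json tail shown in the diff.
fragment = """{
  "num_hidden_layers": 12,
  "patch_size": 16,
  "qkv_bias": true,
  "transformers_version": "4.36.2"
}"""

config = json.loads(fragment)
print(config["transformers_version"])  # 4.36.2
```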
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:dd0d91fa681683c8b33d18350c8c5e7c47691f7c872a418d28faf894fdd05d3d
+oid sha256:6524ab6ea2643de88b44d357d184c61c0daf59d75770f1eeb0299068fffef79f
 size 343494328
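
What is versioned for `tf_model.h5` is not the weights themselves but a Git LFS pointer file: `version`, `oid`, and `size` lines per the spec URL shown, where `oid` is the SHA-256 digest of the actual blob. A hedged, stdlib-only sketch of parsing the new pointer and of how one could verify a downloaded blob against it (the file path in `verify_blob` is hypothetical; the real blob is fetched by `git lfs pull`):

```python
import hashlib

# The new pointer contents from the diff above.
POINTER = """version https://git-lfs.github.com/spec/v1
oid sha256:6524ab6ea2643de88b44d357d184c61c0daf59d75770f1eeb0299068fffef79f
size 343494328
"""

def parse_lfs_pointer(text):
    """Split an LFS pointer ("key value" lines) into a dict."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = parse_lfs_pointer(POINTER)
algo, expected_digest = pointer["oid"].split(":", 1)

def verify_blob(path, algo=algo, expected=expected_digest):
    """Hash a downloaded blob in chunks and compare to the pointer's oid."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected
```

The unchanged `size 343494328` line confirms the retrained weights file is byte-for-byte the same size as before; only its content hash differs.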