Hemg committed
Commit 65d61b9 · verified · 1 Parent(s): 36594c5

Model save
README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1728
- - Accuracy: 0.9544
+ - Loss: 0.1972
+ - Accuracy: 0.9472
 
  ## Model description
 
@@ -38,28 +38,36 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 4e-05
- - train_batch_size: 16
- - eval_batch_size: 16
+ - train_batch_size: 32
+ - eval_batch_size: 32
  - seed: 42
- - gradient_accumulation_steps: 2
- - total_train_batch_size: 32
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 128
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 8
+ - num_epochs: 16
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 1.0978        | 0.99  | 52   | 0.6856          | 0.7602   |
- | 0.5021        | 2.0   | 105  | 0.3709          | 0.9041   |
- | 0.3131        | 2.99  | 157  | 0.2680          | 0.9281   |
- | 0.226         | 4.0   | 210  | 0.2189          | 0.9424   |
- | 0.1682        | 4.99  | 262  | 0.2105          | 0.9376   |
- | 0.155         | 6.0   | 315  | 0.1873          | 0.9448   |
- | 0.127         | 6.99  | 367  | 0.1635          | 0.9568   |
- | 0.1227        | 7.92  | 416  | 0.1728          | 0.9544   |
+ | 1.3603        | 0.98  | 13   | 1.1922          | 0.5444   |
+ | 1.0671        | 1.96  | 26   | 0.8661          | 0.7026   |
+ | 0.7531        | 2.94  | 39   | 0.6021          | 0.8393   |
+ | 0.4923        | 4.0   | 53   | 0.4488          | 0.8849   |
+ | 0.3921        | 4.98  | 66   | 0.3764          | 0.9113   |
+ | 0.3284        | 5.96  | 79   | 0.3130          | 0.9233   |
+ | 0.3001        | 6.94  | 92   | 0.3303          | 0.9041   |
+ | 0.2354        | 8.0   | 106  | 0.2644          | 0.9305   |
+ | 0.2283        | 8.98  | 119  | 0.2602          | 0.9400   |
+ | 0.2131        | 9.96  | 132  | 0.2318          | 0.9472   |
+ | 0.2029        | 10.94 | 145  | 0.2197          | 0.9448   |
+ | 0.1663        | 12.0  | 159  | 0.2352          | 0.9305   |
+ | 0.1716        | 12.98 | 172  | 0.2014          | 0.9544   |
+ | 0.1609        | 13.96 | 185  | 0.2081          | 0.9472   |
+ | 0.1633        | 14.94 | 198  | 0.2030          | 0.9472   |
+ | 0.1424        | 15.7  | 208  | 0.1972          | 0.9472   |
 
 
  ### Framework versions
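The updated hyperparameters are internally consistent: the reported `total_train_batch_size` of 128 is the per-device `train_batch_size` times `gradient_accumulation_steps`. A minimal sketch of that arithmetic, assuming a single device and using the 208 optimizer steps reported in the final results row to illustrate the `warmup_ratio`:

```python
# Relating the logged hyperparameters. The device count and the use of the
# final step count for the warmup calculation are assumptions for illustration.
train_batch_size = 32            # per-device batch size (from the README)
gradient_accumulation_steps = 4  # from the README
num_devices = 1                  # assumed: single GPU

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)    # 128, matching the README

# lr_scheduler_warmup_ratio 0.1 with a linear scheduler: the learning rate
# ramps up over the first 10% of optimizer steps, then decays linearly.
total_steps = 208                # final step in the results table above
warmup_steps = int(0.1 * total_steps)
print(warmup_steps)              # 20
```

Doubling the per-device batch size and the accumulation steps quadruples the effective batch relative to the previous run (32 → 128), which is why each epoch now takes only 13 optimizer steps instead of 52.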
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ed2d1f46008694640155d7495242f17b014bf4ae786b05be5eb8ef949f706900
+ oid sha256:03daaebffd2bcfcd318620d71653e336bada8083eace2f6d3cb5af40bd57c9f4
  size 343230128
runs/Mar06_07-36-46_65f3a70606cc/events.out.tfevents.1709710607.65f3a70606cc.34.1 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1aedf9a11e5e4925cd6b8af28c39daf57434246f82fa1e95ea46e0c22e417fd5
- size 13134
+ oid sha256:09ff9dc482861cf01251e1cb9016b8630e535f83afcd9e59b3e30d4080a28531
+ size 13488
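These LFS pointer files record only the SHA-256 digest (`oid`) and byte size of the real artifact. A sketch of verifying a downloaded file against the new `oid` of `model.safetensors` from this commit; the local path is an assumption:

```python
import hashlib

# The oid recorded in this commit's pointer for model.safetensors.
EXPECTED_OID = "03daaebffd2bcfcd318620d71653e336bada8083eace2f6d3cb5af40bd57c9f4"

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large weight files never sit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# After downloading the weights (path assumed), compare against the pointer:
# assert sha256_of("model.safetensors") == EXPECTED_OID
```

A digest mismatch usually means a truncated download or that the pointer was fetched instead of the resolved LFS object.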