Commit 1265604 by Hemg (verified) · Parent: 5c172f2

Model save

Files changed (1): README.md (+13 −13)

README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.7213
-- Accuracy: 0.4483
+- Loss: 1.0475
+- Accuracy: 0.7586
 
 ## Model description
 
@@ -38,11 +38,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- train_batch_size: 1
+- eval_batch_size: 1
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 32
+- total_train_batch_size: 4
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
@@ -52,14 +52,14 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.0681        | 0.93  | 10   | 1.9095          | 0.2874   |
-| 1.831         | 1.95  | 21   | 1.8602          | 0.3793   |
-| 1.7667        | 2.98  | 32   | 1.8181          | 0.4023   |
-| 1.7262        | 4.0   | 43   | 1.7910          | 0.4598   |
-| 1.8574        | 4.93  | 53   | 1.7632          | 0.4598   |
-| 1.6504        | 5.95  | 64   | 1.7332          | 0.4828   |
-| 1.6352        | 6.98  | 75   | 1.7169          | 0.4828   |
-| 1.5997        | 7.44  | 80   | 1.7213          | 0.4483   |
+| 1.8893        | 1.0   | 86   | 1.8192          | 0.4253   |
+| 1.6679        | 2.0   | 172  | 1.5983          | 0.4828   |
+| 1.4013        | 3.0   | 258  | 1.3888          | 0.6437   |
+| 1.2347        | 4.0   | 344  | 1.2665          | 0.6782   |
+| 1.0931        | 5.0   | 430  | 1.1818          | 0.6782   |
+| 0.9913        | 6.0   | 516  | 1.1167          | 0.7241   |
+| 0.9458        | 7.0   | 602  | 1.0819          | 0.7241   |
+| 0.909         | 8.0   | 688  | 1.0475          | 0.7586   |
 
 
 ### Framework versions
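
As a sanity check on the diff: with gradient_accumulation_steps kept at 4, this commit drops the effective batch size from 8 × 4 = 32 to 1 × 4 = 4, and the step counts in both log tables are consistent with a training set of roughly 344 examples. Note the dataset size is an inference from the tables, not something stated in the commit:

```python
# Effective batch size = per-device train batch size * gradient_accumulation_steps.
old_total_batch = 8 * 4   # before this commit -> 32
new_total_batch = 1 * 4   # after this commit  -> 4

# dataset_size is inferred (not stated in the commit): the new table logs
# 86 optimizer steps per epoch at an effective batch of 4.
dataset_size = 86 * new_total_batch  # ~344 examples

# Old table: epoch 4.0 falls on step 43; new table: epoch 8.0 on step 688.
assert 4 * dataset_size // old_total_batch == 43
assert 8 * dataset_size // new_total_batch == 688
```

The same arithmetic explains the fractional epoch values in the old table (e.g. epoch 0.93 at step 10, since 344/32 ≈ 10.75 steps per epoch), whereas the new run lands on whole epochs exactly.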