Hemg committed on
Commit
b143576
·
verified ·
1 Parent(s): 1b4bb03

Model save

README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.7997
- - Accuracy: 0.9382
+ - Loss: 0.1943
+ - Accuracy: 0.9527
 
  ## Model description
 
@@ -46,15 +46,29 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 2
+ - num_epochs: 16
  - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 3.7533 | 0.99 | 68 | 1.6619 | 0.9041 |
- | 1.0021 | 1.98 | 136 | 0.7997 | 0.9382 |
+ | 5.2759 | 0.99 | 68 | 3.4103 | 0.86 |
+ | 1.7015 | 1.99 | 137 | 0.7510 | 0.8945 |
+ | 0.566 | 3.0 | 206 | 0.5793 | 0.8664 |
+ | 0.4301 | 4.0 | 275 | 0.4694 | 0.8909 |
+ | 0.3603 | 4.99 | 343 | 0.3995 | 0.9036 |
+ | 0.3032 | 5.99 | 412 | 0.3757 | 0.9036 |
+ | 0.2632 | 7.0 | 481 | 0.3843 | 0.9059 |
+ | 0.2211 | 8.0 | 550 | 0.3490 | 0.9123 |
+ | 0.1929 | 8.99 | 618 | 0.3618 | 0.9045 |
+ | 0.1645 | 9.99 | 687 | 0.2970 | 0.9241 |
+ | 0.1621 | 11.0 | 756 | 0.2874 | 0.93 |
+ | 0.1337 | 12.0 | 825 | 0.2705 | 0.9391 |
+ | 0.1238 | 12.99 | 893 | 0.2231 | 0.9436 |
+ | 0.1096 | 13.99 | 962 | 0.2440 | 0.9441 |
+ | 0.0979 | 15.0 | 1031 | 0.2371 | 0.9423 |
+ | 0.0808 | 15.83 | 1088 | 0.1943 | 0.9527 |
 
 
  ### Framework versions
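The hyperparameters above pair a linear lr_scheduler_type with lr_scheduler_warmup_ratio 0.1: the learning rate ramps up linearly over the first 10% of training steps, then decays linearly to zero. A minimal sketch of that multiplier, independent of any framework (the helper name is ours; the step counts are taken from the training-results table, where 16 epochs end at step 1088):

```python
def linear_schedule_with_warmup(step, total_steps, warmup_ratio=0.1):
    """Learning-rate multiplier for a linear schedule with warmup:
    ramps 0 -> 1 over the warmup phase, then decays 1 -> 0 linearly."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With 16 epochs at ~68 steps/epoch, training ends at step 1088 (last table row).
total = 1088
peak = linear_schedule_with_warmup(108, total)   # end of warmup: multiplier is 1.0
final = linear_schedule_with_warmup(total, total)  # last step: multiplier is 0.0
```

The actual run would multiply this factor by the base learning rate each step; the Trainer does this internally when lr_scheduler_type is linear.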
runs/Mar17_11-51-17_1a8f1340d043/events.out.tfevents.1710676278.1a8f1340d043.34.5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:77b95d879428897f777f8ce57c22c00ba29e512aa7bad300aa0005c404271a09
- size 44222
+ oid sha256:db238a4bb62816f2c7da07ab74dd6cbc3bf9879d958bfd1ea7df025fcb6de49b
+ size 44576
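The tfevents change above is a Git LFS pointer update, not the binary log itself: the repository stores only a small text file with `version`, `oid`, and `size` lines, and the commit swaps in the new object's hash and size. A minimal sketch of reading such a pointer (the helper name is ours; the values are the ones from this diff):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file (spec v1) into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:db238a4bb62816f2c7da07ab74dd6cbc3bf9879d958bfd1ea7df025fcb6de49b
size 44576
"""
info = parse_lfs_pointer(pointer)  # info["size"] is the byte count of the real file
```

Because only this tiny pointer lives in git history, replacing a large training log changes just two short lines in the diff.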