Hemg committed on
Commit 21922fe · verified · 1 Parent(s): 55b30e5

Model save

Files changed (1)
  1. README.md +23 -16
README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.3796
- - Accuracy: 0.8912
+ - Loss: 0.1209
+ - Accuracy: 0.965
 
  ## Model description
 
@@ -38,34 +38,41 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 0.0002
- - train_batch_size: 8
- - eval_batch_size: 8
+ - train_batch_size: 4
+ - eval_batch_size: 4
  - seed: 42
  - gradient_accumulation_steps: 2
- - total_train_batch_size: 16
+ - total_train_batch_size: 8
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.1
+ - lr_scheduler_warmup_ratio: 0.01
  - num_epochs: 16
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 1.6735 | 1.0 | 147 | 1.0660 | 0.7160 |
- | 0.8271 | 2.0 | 294 | 0.7849 | 0.7381 |
- | 0.5955 | 3.0 | 441 | 0.6233 | 0.7976 |
- | 0.4502 | 4.0 | 588 | 0.5057 | 0.8469 |
- | 0.3554 | 5.0 | 735 | 0.5081 | 0.8418 |
- | 0.2961 | 6.0 | 882 | 0.4161 | 0.8929 |
- | 0.2378 | 7.0 | 1029 | 0.4914 | 0.8656 |
- | 0.2063 | 8.0 | 1176 | 0.4416 | 0.8673 |
- | 0.2074 | 9.0 | 1323 | 0.3796 | 0.8912 |
+ | 1.0919 | 1.0 | 200 | 0.7780 | 0.76 |
+ | 0.6157 | 2.0 | 400 | 0.5695 | 0.7925 |
+ | 0.4894 | 3.0 | 600 | 0.3667 | 0.8775 |
+ | 0.3786 | 4.0 | 800 | 0.4436 | 0.8625 |
+ | 0.3142 | 5.0 | 1000 | 0.4412 | 0.8625 |
+ | 0.2636 | 6.0 | 1200 | 0.4430 | 0.86 |
+ | 0.198 | 7.0 | 1400 | 0.2760 | 0.9175 |
+ | 0.1456 | 8.0 | 1600 | 0.2211 | 0.93 |
+ | 0.1586 | 9.0 | 1800 | 0.3520 | 0.905 |
+ | 0.1307 | 10.0 | 2000 | 0.3188 | 0.9175 |
+ | 0.106 | 11.0 | 2200 | 0.3167 | 0.925 |
+ | 0.0975 | 12.0 | 2400 | 0.2633 | 0.92 |
+ | 0.0734 | 13.0 | 2600 | 0.1813 | 0.9525 |
+ | 0.0994 | 14.0 | 2800 | 0.2150 | 0.945 |
+ | 0.0622 | 15.0 | 3000 | 0.1757 | 0.955 |
+ | 0.0609 | 16.0 | 3200 | 0.1209 | 0.965 |
 
 
  ### Framework versions
 
  - Transformers 4.38.2
- - Pytorch 2.1.2
+ - Pytorch 2.1.0+cu121
  - Datasets 2.18.0
  - Tokenizers 0.15.2
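
For reference, the updated hyperparameters in this commit map roughly onto `transformers.TrainingArguments` as sketched below. This is a minimal illustration, not the training script from this commit; `output_dir` and the per-epoch evaluation strategy are assumptions.

```python
# Illustrative mapping of the card's hyperparameters to TrainingArguments
# (transformers 4.38.2 API). output_dir and evaluation_strategy are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-finetune",            # placeholder, not from the commit
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,        # effective train batch size: 8
    num_train_epochs=16,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    seed=42,
    evaluation_strategy="epoch",          # assumption: per-epoch eval, as in the results table
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```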
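A hedged sketch of running inference with the resulting checkpoint via the `transformers` image-classification pipeline; the repo id `Hemg/<model-name>` and the image path are placeholders, since the model's actual repository id is not stated in this commit.

```python
# Illustrative inference for a ViT classifier fine-tuned from
# google/vit-base-patch16-224-in21k. Repo id and image path are placeholders.
from transformers import pipeline

classifier = pipeline("image-classification", model="Hemg/<model-name>")
predictions = classifier("example.jpg")   # list of {"label": ..., "score": ...} dicts
print(predictions)
```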