stnleyyg committed
Commit bc7b3b0 · verified · 1 Parent(s): d0b8a35

End of training

Files changed (2):
  1. README.md +8 -44
  2. model.safetensors +1 -1
README.md CHANGED
@@ -6,24 +6,9 @@ tags:
 - generated_from_trainer
 datasets:
 - imagefolder
-metrics:
-- accuracy
 model-index:
 - name: image_classification
-  results:
-  - task:
-      name: Image Classification
-      type: image-classification
-    dataset:
-      name: imagefolder
-      type: imagefolder
-      config: default
-      split: train
-      args: default
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.4625
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,8 +18,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.4745
-- Accuracy: 0.4625
+- eval_loss: 2.0879
+- eval_model_preparation_time: 0.0065
+- eval_accuracy: 0.1187
+- eval_runtime: 43.6598
+- eval_samples_per_second: 3.665
+- eval_steps_per_second: 0.115
+- step: 0
 
 ## Model description
 
@@ -64,32 +54,6 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_ratio: 0.3
 - num_epochs: 20
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.0497 | 1.0 | 10 | 1.9973 | 0.2938 |
-| 1.8686 | 2.0 | 20 | 1.9635 | 0.3 |
-| 1.825 | 3.0 | 30 | 1.9291 | 0.3125 |
-| 1.6801 | 4.0 | 40 | 1.7476 | 0.3688 |
-| 1.4192 | 5.0 | 50 | 1.6341 | 0.4125 |
-| 1.2182 | 6.0 | 60 | 1.6148 | 0.3937 |
-| 1.1688 | 7.0 | 70 | 1.5954 | 0.4 |
-| 1.0198 | 8.0 | 80 | 1.5394 | 0.4 |
-| 0.7465 | 9.0 | 90 | 1.4797 | 0.4313 |
-| 0.5733 | 10.0 | 100 | 1.4648 | 0.4 |
-| 0.5417 | 11.0 | 110 | 1.4613 | 0.3937 |
-| 0.4472 | 12.0 | 120 | 1.4418 | 0.4562 |
-| 0.3015 | 13.0 | 130 | 1.4745 | 0.4625 |
-| 0.2287 | 14.0 | 140 | 1.4175 | 0.45 |
-| 0.2154 | 15.0 | 150 | 1.4635 | 0.4562 |
-| 0.1787 | 16.0 | 160 | 1.4860 | 0.4437 |
-| 0.1312 | 17.0 | 170 | 1.5453 | 0.45 |
-| 0.1106 | 18.0 | 180 | 1.5976 | 0.4375 |
-| 0.1075 | 19.0 | 190 | 1.5866 | 0.4188 |
-| 0.0989 | 20.0 | 200 | 1.6112 | 0.4188 |
-
-
 ### Framework versions
 
 - Transformers 4.46.2
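The metrics stripped from the card's YAML header (Accuracy 0.4625) and old summary (Loss 1.4745) come from the best-accuracy checkpoint of the deleted "Training results" table, the epoch-13 row. A minimal sketch confirming that, with a few rows transcribed from the table (the `rows` list and variable names are illustrative, not part of the repo):

```python
# A few (epoch, validation_loss, accuracy) rows transcribed from the
# deleted "Training results" table in the diff above (not the full run).
rows = [
    (11.0, 1.4613, 0.3937),
    (12.0, 1.4418, 0.4562),
    (13.0, 1.4745, 0.4625),
    (14.0, 1.4175, 0.45),
    (20.0, 1.6112, 0.4188),
]

# The card's old header metric was the row with the highest accuracy.
best_epoch, best_loss, best_acc = max(rows, key=lambda r: r[2])
print(best_epoch, best_loss, best_acc)  # 13.0 1.4745 0.4625
```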
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1e69874220a0f31e149d17e82fb7495c63aed85cb69d207d33615d7116a043f3
+oid sha256:be4e172a71f4f2e9026ad4e6c908f74d9e317638cea22a9164079ada40b6afb1
 size 343242432
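The model.safetensors entries above are git-LFS pointer files (version, oid, size), not raw tensors; the commit replaces the weight blob while the serialized size stays identical. A small sketch, with an illustrative `parse_pointer` helper, checking exactly that:

```python
def parse_pointer(text: str) -> dict:
    """Split each 'key value' line of a git-lfs pointer file into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Old and new pointer contents, copied from the diff above.
old = parse_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:1e69874220a0f31e149d17e82fb7495c63aed85cb69d207d33615d7116a043f3\n"
    "size 343242432\n"
)
new = parse_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:be4e172a71f4f2e9026ad4e6c908f74d9e317638cea22a9164079ada40b6afb1\n"
    "size 343242432\n"
)

# New weights (different hash), same file size -- consistent with
# updated parameters in an unchanged architecture.
assert old["oid"] != new["oid"]
assert old["size"] == new["size"] == "343242432"
```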