dyaminda committed
Commit d4f40cf · 1 Parent(s): c348f28

End of training

Files changed (1): README.md (+11 −38)
README.md CHANGED
@@ -4,7 +4,7 @@ base_model: google/vit-base-patch16-224-in21k
 tags:
 - generated_from_trainer
 datasets:
-- imagefolder
+- beans
 metrics:
 - accuracy
 model-index:
@@ -14,15 +14,15 @@ model-index:
       name: Image Classification
       type: image-classification
     dataset:
-      name: imagefolder
-      type: imagefolder
+      name: beans
+      type: beans
       config: default
       split: train
       args: default
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.5375
+      value: 0.9420289855072463
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,10 +30,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # image_classification
 
-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.3106
-- Accuracy: 0.5375
+- Loss: 0.3869
+- Accuracy: 0.9420
 
 ## Model description
 
@@ -61,42 +61,15 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 30
+- num_epochs: 3
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 1.367 | 1.0 | 10 | 1.5172 | 0.4437 |
-| 1.3337 | 2.0 | 20 | 1.5171 | 0.3875 |
-| 1.2582 | 3.0 | 30 | 1.4672 | 0.4813 |
-| 1.1813 | 4.0 | 40 | 1.3955 | 0.5 |
-| 1.1025 | 5.0 | 50 | 1.3617 | 0.5125 |
-| 1.0447 | 6.0 | 60 | 1.3491 | 0.5125 |
-| 0.9362 | 7.0 | 70 | 1.3079 | 0.5437 |
-| 0.8991 | 8.0 | 80 | 1.3291 | 0.4938 |
-| 0.8562 | 9.0 | 90 | 1.3445 | 0.4625 |
-| 0.7978 | 10.0 | 100 | 1.3168 | 0.5 |
-| 0.7183 | 11.0 | 110 | 1.2865 | 0.4875 |
-| 0.6941 | 12.0 | 120 | 1.2461 | 0.55 |
-| 0.6455 | 13.0 | 130 | 1.2330 | 0.6062 |
-| 0.6164 | 14.0 | 140 | 1.3818 | 0.4813 |
-| 0.5807 | 15.0 | 150 | 1.3250 | 0.5062 |
-| 0.5643 | 16.0 | 160 | 1.3206 | 0.5188 |
-| 0.5061 | 17.0 | 170 | 1.2957 | 0.5125 |
-| 0.478 | 18.0 | 180 | 1.3782 | 0.4625 |
-| 0.4615 | 19.0 | 190 | 1.2772 | 0.5563 |
-| 0.4514 | 20.0 | 200 | 1.2278 | 0.5375 |
-| 0.4196 | 21.0 | 210 | 1.2861 | 0.5188 |
-| 0.4589 | 22.0 | 220 | 1.2778 | 0.5375 |
-| 0.4303 | 23.0 | 230 | 1.2534 | 0.5687 |
-| 0.4023 | 24.0 | 240 | 1.3352 | 0.5312 |
-| 0.3967 | 25.0 | 250 | 1.3381 | 0.5375 |
-| 0.3725 | 26.0 | 260 | 1.2320 | 0.5625 |
-| 0.3757 | 27.0 | 270 | 1.2201 | 0.5625 |
-| 0.3913 | 28.0 | 280 | 1.2204 | 0.5563 |
-| 0.3982 | 29.0 | 290 | 1.2564 | 0.5437 |
-| 0.3765 | 30.0 | 300 | 1.3752 | 0.5 |
+| 0.9716 | 1.0 | 13 | 0.6693 | 0.9275 |
+| 0.6674 | 2.0 | 26 | 0.4463 | 0.9565 |
+| 0.5056 | 3.0 | 39 | 0.3736 | 0.9662 |
 
 
 ### Framework versions
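The updated card trains with `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1` for 3 epochs of 13 steps each (39 optimizer steps total, per the results table). As a rough sketch of what that schedule does (an illustrative helper, not the transformers implementation; the base learning rate of 5e-5 is assumed, since it is not shown in this hunk):

```python
def linear_schedule_with_warmup(step, total_steps, base_lr, warmup_ratio=0.1):
    """Sketch of a linear LR schedule with warmup: the LR ramps from 0 to
    base_lr over the warmup steps, then decays linearly back to 0 by the
    final step."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Warmup phase: scale LR up proportionally to the step count.
        return base_lr * step / max(1, warmup_steps)
    # Decay phase: scale LR down linearly over the remaining steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With the values above: 39 total steps, warmup over the first 3 steps.
peak = linear_schedule_with_warmup(3, 39, 5e-5)    # reaches base_lr right after warmup
final = linear_schedule_with_warmup(39, 39, 5e-5)  # decays to 0 at the last step
```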