Model save
README.md
CHANGED
@@ -3,8 +3,6 @@ library_name: transformers
 license: apache-2.0
 base_model: google/vit-base-patch16-224-in21k
 tags:
-- image-classification
-- vision
 - generated_from_trainer
 datasets:
 - imagefolder
@@ -25,7 +23,7 @@ model-index:
     metrics:
     - name: Accuracy
      type: accuracy
-      value: 0.
+      value: 0.8597560975609756
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -35,8 +33,8 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Accuracy: 0.
+- Loss: 0.3956
+- Accuracy: 0.8598

 ## Model description

@@ -65,13 +63,15 @@ The following hyperparameters were used during training:

 ### Training results

-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 0.448         | 1.0   | 82   | 0.              |
-| 0.5097        | 2.0   | 164  | 0.              |
-| 0.452         | 3.0   | 246  | 0.              |
-| 0.3885        | 4.0   | 328  | 0.              |
-| 0.4743        | 5.0   | 410  | 0.              |
+| Training Loss | Epoch | Step | Accuracy | Validation Loss |
+|:-------------:|:-----:|:----:|:--------:|:---------------:|
+| 0.448         | 1.0   | 82   | 0.7304   | 0.5725          |
+| 0.5097        | 2.0   | 164  | 0.7652   | 0.4946          |
+| 0.452         | 3.0   | 246  | 0.7565   | 0.4841          |
+| 0.3885        | 4.0   | 328  | 0.7565   | 0.4812          |
+| 0.4743        | 5.0   | 410  | 0.7739   | 0.4626          |
+| 0.4749        | 4.0   | 464  | 0.4572   | 0.7988          |
+| 0.4319        | 5.0   | 580  | 0.3956   | 0.8598          |


 ### Framework versions
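
Since the updated card describes an image classifier fine-tuned from google/vit-base-patch16-224-in21k, the checkpoint can be loaded with the standard transformers image-classification pipeline. This is only a minimal sketch: the repo id `your-username/vit-finetuned-imagefolder` is a placeholder for this model's actual repository, and `example.jpg` stands in for any input image.

```python
from PIL import Image
from transformers import pipeline

# Placeholder repo id -- substitute the repository this README belongs to.
classifier = pipeline(
    "image-classification",
    model="your-username/vit-finetuned-imagefolder",
)

image = Image.open("example.jpg")        # any RGB image works
for pred in classifier(image, top_k=3):  # top-3 labels with confidence scores
    print(f"{pred['label']}: {pred['score']:.4f}")
```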
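The accuracy that now appears in the metrics block (0.8597560975609756) and in the per-epoch table is the kind of figure a `compute_metrics` hook reports during `Trainer` evaluation. The training script is not part of this commit, so the following is only an illustrative sketch of the usual pattern, not the author's code:

```python
import numpy as np
import evaluate

# Illustrative sketch of a typical compute_metrics hook, not this repo's script.
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels) for the evaluation set.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```

Passing a function like this as `compute_metrics=` to `Trainer` is what typically adds the `Accuracy` column alongside `Validation Loss` in the training-results table above.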