update model card README.md
README.md CHANGED
@@ -2,7 +2,6 @@
 license: apache-2.0
 base_model: google/vit-base-patch16-224
 tags:
-- image-classification
 - generated_from_trainer
 datasets:
 - imagefolder
@@ -15,7 +14,7 @@ model-index:
       name: Image Classification
       type: image-classification
     dataset:
-      name:
+      name: imagefolder
       type: imagefolder
       config: default
       split: validation
@@ -23,7 +22,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.75
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,10 +30,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # vit-base
 
-This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the 
+This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Accuracy: 0.
+- Loss: 0.7729
+- Accuracy: 0.75
 
 ## Model description
 
@@ -63,6 +62,9 @@ The following hyperparameters were used during training:
 
 ### Training results
 
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| 0.0           | 50.0  | 100  | 0.7729          | 0.75     |
 
 
 ### Framework versions
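For reference, the updated card describes an image-classification checkpoint fine-tuned from google/vit-base-patch16-224. A minimal inference sketch follows; it assumes the transformers library, and the repo id `your-username/vit-base` is a placeholder for wherever this checkpoint is actually published (a local checkpoint directory also works).

```python
# Minimal inference sketch for the fine-tuned ViT checkpoint described in this card.
# NOTE: "your-username/vit-base" is a placeholder repo id, not the actual hub path.
from transformers import pipeline

classifier = pipeline("image-classification", model="your-username/vit-base")

# Input can be a local file path, a URL, or a PIL image.
preds = classifier("example.jpg")
for p in preds:
    print(f"{p['label']}: {p['score']:.4f}")
```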