---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-RU9-24
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8431372549019608
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-RU9-24

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5081
- Accuracy: 0.8431
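
As a minimal usage sketch (the hub repository id and the example image path below are placeholders; this card does not document where the checkpoint is published or which class labels it predicts), the checkpoint can be loaded with the `transformers` image-classification pipeline:

```python
from transformers import pipeline
from PIL import Image

# Placeholder model id: replace with the full hub id of this fine-tuned checkpoint.
classifier = pipeline("image-classification", model="vit-base-patch16-224-RU9-24")

# The bundled image processor resizes and normalizes the input to the 224x224 resolution ViT expects.
image = Image.open("example.jpg").convert("RGB")
for prediction in classifier(image):
    print(prediction["label"], round(prediction["score"], 4))
```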

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
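
The card records only that an `imagefolder`-format dataset was used, with metrics reported on its validation split; the images and class labels themselves are not documented. For reference, a dataset in this format is typically loaded with the `datasets` library like so (the directory path is a placeholder):

```python
from datasets import load_dataset

# Placeholder path: an imagefolder dataset expects one subdirectory per class label.
dataset = load_dataset("imagefolder", data_dir="path/to/dataset")
print(dataset)  # DatasetDict with splits inferred from the directory layout
```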

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 24
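
Only the values above are recorded in this card; the training script itself is not included. As an illustrative sketch, assuming the standard Hugging Face `Trainer` API was used, these settings map onto `TrainingArguments` roughly as follows (the output directory and the evaluation/save strategies are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-RU9-24",  # assumed; not documented in this card
    learning_rate=5.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # 32 x 4 = 128 effective train batch size
    num_train_epochs=24,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    seed=42,
    # Assumption: the per-epoch rows in the results table suggest evaluating and saving once per epoch.
    evaluation_strategy="epoch",
    save_strategy="epoch",
)
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the `TrainingArguments` defaults, so it is not set explicitly in the sketch.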

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 8    | 1.3401          | 0.5098   |
| 1.3685        | 2.0   | 16   | 1.2193          | 0.5686   |
| 1.2413        | 3.0   | 24   | 1.1150          | 0.5882   |
| 1.1126        | 4.0   | 32   | 0.9957          | 0.7059   |
| 0.9285        | 5.0   | 40   | 0.8976          | 0.6863   |
| 0.9285        | 6.0   | 48   | 0.8580          | 0.6863   |
| 0.7793        | 7.0   | 56   | 0.8426          | 0.7647   |
| 0.6291        | 8.0   | 64   | 0.7899          | 0.6863   |
| 0.5401        | 9.0   | 72   | 0.7169          | 0.7255   |
| 0.4358        | 10.0  | 80   | 0.7505          | 0.7255   |
| 0.4358        | 11.0  | 88   | 0.8077          | 0.7059   |
| 0.3901        | 12.0  | 96   | 0.6803          | 0.7647   |
| 0.3033        | 13.0  | 104  | 0.6483          | 0.7647   |
| 0.267         | 14.0  | 112  | 0.6451          | 0.7451   |
| 0.2212        | 15.0  | 120  | 0.6119          | 0.7647   |
| 0.2212        | 16.0  | 128  | 0.6150          | 0.8039   |
| 0.2206        | 17.0  | 136  | 0.6270          | 0.7843   |
| 0.2285        | 18.0  | 144  | 0.6181          | 0.7647   |
| 0.1741        | 19.0  | 152  | 0.5081          | 0.8431   |
| 0.1708        | 20.0  | 160  | 0.5502          | 0.8235   |
| 0.1708        | 21.0  | 168  | 0.5689          | 0.8039   |
| 0.16          | 22.0  | 176  | 0.5137          | 0.8235   |
| 0.1567        | 23.0  | 184  | 0.5207          | 0.8431   |
| 0.1616        | 24.0  | 192  | 0.5375          | 0.8235   |

### Framework versions

- Transformers 4.36.2
- PyTorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0