How to use nickypro/vit-cifar100 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="nickypro/vit-cifar100")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("nickypro/vit-cifar100")
model = AutoModelForImageClassification.from_pretrained("nickypro/vit-cifar100")
```

Note: This model is copied from Ahmed9275/Vit-Cifar100. See below for details.
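For reference, the image-classification pipeline's postprocessing reduces to a softmax over the model's logits (100 of them for CIFAR-100) followed by a top-k selection over the label names. A minimal sketch of that step with mock logits and a toy 5-label head, so no model download is needed (`softmax` and `top_k` here are illustrative helpers, not Transformers APIs):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(probs, labels, k=5):
    # Pair each probability with its label and keep the k highest,
    # mirroring the {"label": ..., "score": ...} dicts the pipeline returns.
    ranked = sorted(zip(probs, labels), reverse=True)
    return [{"label": lbl, "score": p} for p, lbl in ranked[:k]]

# Mock logits for a toy 5-class head (the real model emits 100 values).
labels = ["apple", "bed", "bee", "bicycle", "bottle"]
logits = [0.2, 2.5, 0.1, 1.0, -0.3]
print(top_k(softmax(logits), labels, k=2))
```

With the real model, `model(**processor(images=img, return_tensors="pt")).logits` supplies the logits and `model.config.id2label` supplies the label names.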
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the CIFAR-100 dataset. It achieves the following results on the evaluation set: more information needed.
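Note that google/vit-base-patch16-224-in21k expects 224×224 inputs, so the 32×32 CIFAR-100 images are resized and normalized by the image processor before reaching the model. Assuming the ViT processor's default mean/std of 0.5, the per-channel normalization it applies reduces to this sketch (`normalize_pixel` is an illustrative helper, not a Transformers API):

```python
def normalize_pixel(value, mean=0.5, std=0.5):
    # Scale an 8-bit channel value to [0, 1], then standardize with the
    # processor's (assumed default) mean and std of 0.5.
    return (value / 255.0 - mean) / std

# A white channel value maps to 1.0, a black one to -1.0.
print(normalize_pixel(255), normalize_pixel(0))
```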
The following hyperparameters were used during training: more information needed.

Training results:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 1.08 | 1.0 | 3125 | 0.6196 | 0.8262 |
| 0.3816 | 2.0 | 6250 | 0.5322 | 0.8555 |
| 0.1619 | 3.0 | 9375 | 0.4817 | 0.8765 |
| 0.0443 | 4.0 | 12500 | 0.4420 | 0.8985 |