## How to use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="WinKawaks/vit-tiny-patch16-224")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```
```python
# Load the processor and model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("WinKawaks/vit-tiny-patch16-224")
model = AutoModelForImageClassification.from_pretrained("WinKawaks/vit-tiny-patch16-224")
```
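When loading the model directly, you also run preprocessing and decode the predicted label yourself. Below is a minimal end-to-end sketch using the same parrots image as the pipeline example; it assumes an internet connection to fetch the checkpoint and image, and that the model carries the standard ImageNet-1k classification head.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("WinKawaks/vit-tiny-patch16-224")
model = AutoModelForImageClassification.from_pretrained("WinKawaks/vit-tiny-patch16-224")

# Fetch the sample image and normalize it to the 224x224 input the model expects
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Forward pass without tracking gradients; logits cover the ImageNet-1k classes
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```

The `id2label` mapping stored in the model config converts the argmax index back into a human-readable class name, which is what the pipeline does for you internally.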
Google didn't publish vit-tiny and vit-small model checkpoints on Hugging Face, so I converted the weights from the timm repository. This model is used in the same way as ViT-base.

Note that the safetensors checkpoint requires a torch >= 2.0 environment.
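If you want to make sure the safetensors weights are used rather than a pickle-based `.bin` checkpoint, `from_pretrained` accepts a `use_safetensors` flag. A short sketch, assuming torch >= 2.0 is installed:

```python
from transformers import AutoModelForImageClassification

# Explicitly request the .safetensors checkpoint instead of pytorch_model.bin
model = AutoModelForImageClassification.from_pretrained(
    "WinKawaks/vit-tiny-patch16-224",
    use_safetensors=True,
)
```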

Model size: 5.72M parameters (tensor type F32, safetensors).