## How to use dhmeltzer/sagemaker-ViT-CIFAR10 with Transformers

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="dhmeltzer/sagemaker-ViT-CIFAR10")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("dhmeltzer/sagemaker-ViT-CIFAR10")
model = AutoModelForImageClassification.from_pretrained("dhmeltzer/sagemaker-ViT-CIFAR10")
```

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the cifar10 dataset. It achieves the following results on the evaluation set:

- Loss: 0.2966
- Accuracy: 0.9720
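When loading the model directly rather than through a pipeline, you post-process the logits yourself. The sketch below shows that step in isolation, using a synthetic logits tensor in place of a real forward pass; the hardcoded `CIFAR10_LABELS` list (the standard CIFAR-10 class order) is an assumption — check `model.config.id2label` on the actual checkpoint to confirm the mapping.

```python
import torch

# CIFAR-10 class names in the standard label order. Assumption: the fine-tuned
# model's id2label follows this order; verify against model.config.id2label.
CIFAR10_LABELS = ["airplane", "automobile", "bird", "cat", "deer",
                  "dog", "frog", "horse", "ship", "truck"]

def top_prediction(logits: torch.Tensor) -> tuple[str, float]:
    """Map a (1, 10) logits tensor to the top (label, probability) pair."""
    probs = logits.softmax(dim=-1)          # convert logits to probabilities
    score, idx = probs.max(dim=-1)          # highest-probability class
    return CIFAR10_LABELS[idx.item()], score.item()

# Synthetic logits standing in for model(**processor(image, return_tensors="pt")).logits
logits = torch.tensor([[0.1, 0.2, 0.3, 4.0, 0.1, 0.2, 0.1, 0.0, 0.1, 0.2]])
label, prob = top_prediction(logits)
```

In real use, replace the synthetic tensor with the output of the model called on `processor`-prepared pixel values inside a `torch.no_grad()` block.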
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| No log | 1.0 | 313 | 1.4582 | 0.9325 |
| 1.6494 | 2.0 | 626 | 0.4472 | 0.9665 |
| 1.6494 | 3.0 | 939 | 0.2966 | 0.9720 |