Instructions to use JustFadjrin/batik-vit-model-classification with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
How to use JustFadjrin/batik-vit-model-classification with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="JustFadjrin/batik-vit-model-classification")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("JustFadjrin/batik-vit-model-classification")
model = AutoModelForImageClassification.from_pretrained("JustFadjrin/batik-vit-model-classification")
```

- Notebooks
- Google Colab
- Kaggle
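When loading the model directly, a full prediction runs the processed image through the model and maps the highest logit to a class name via the model config's `id2label` mapping. A minimal sketch of that post-processing step, using synthetic logits and a hypothetical three-class `id2label` dictionary in place of the real model outputs (the actual label names come from `model.config.id2label`):

```python
import torch

# Hypothetical id2label mapping; the real one lives in model.config.id2label
id2label = {0: "batik_kawung", 1: "batik_megamendung", 2: "batik_parang"}

# Synthetic logits standing in for:
#   model(**processor(images=image, return_tensors="pt")).logits
logits = torch.tensor([[0.1, 2.3, -0.5]])

# The predicted class is the index of the highest logit
predicted_id = logits.argmax(-1).item()
print(id2label[predicted_id])  # batik_megamendung
```

The same argmax-over-logits step is what the `image-classification` pipeline performs internally before returning its ranked list of labels and scores.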