Instructions for using bn22/naflexvit_small_patch16 with libraries, inference providers, notebooks, and local apps.
- Libraries
- timm
How to use bn22/naflexvit_small_patch16 with timm:
```python
import timm

model = timm.create_model("hf_hub:bn22/naflexvit_small_patch16", pretrained=True)
```
- Transformers
How to use bn22/naflexvit_small_patch16 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="bn22/naflexvit_small_patch16")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("bn22/naflexvit_small_patch16", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
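The timm snippet above only loads the model. A minimal end-to-end inference sketch follows, assuming network access to the Hugging Face Hub and a local image file named `parrots.png`; the `topk_probs` and `run_inference` helpers are illustrative names, not part of timm's API.

```python
import torch


def topk_probs(logits, k=5):
    # Convert raw logits to probabilities and keep the k most likely classes.
    probs = logits.softmax(dim=-1)
    return torch.topk(probs, k)


def run_inference(image_path="parrots.png"):
    # Network-dependent part, kept in a function so the helper above stays
    # importable without triggering a checkpoint download.
    import timm
    from PIL import Image

    model = timm.create_model(
        "hf_hub:bn22/naflexvit_small_patch16", pretrained=True
    )
    model.eval()

    # Resolve the preprocessing (resize, crop, normalization) recorded in the
    # checkpoint's pretrained config, and build the matching eval transform.
    cfg = timm.data.resolve_model_data_config(model)
    transform = timm.data.create_transform(**cfg, is_training=False)

    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        logits = model(transform(img).unsqueeze(0))

    top = topk_probs(logits[0])
    for p, i in zip(top.values.tolist(), top.indices.tolist()):
        print(f"class {i}: p={p:.3f}")
```

Resolving the data config from the model, rather than hard-coding image size and normalization, keeps the preprocessing consistent with whatever the checkpoint was trained with.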
---
tags:
- image-classification
- timm
- transformers
pipeline_tag: image-classification
library_name: timm
license: apache-2.0
---
# Model card for naflexvit_small_patch16