Image Feature Extraction
Tags: Transformers · JAX · Safetensors · MLX · PyTorch · aimv2_vision_model · vision · custom_code · Eval Results (legacy)
Instructions to use apple/aimv2-1B-patch14-224 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use apple/aimv2-1B-patch14-224 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-feature-extraction", model="apple/aimv2-1B-patch14-224", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("apple/aimv2-1B-patch14-224", trust_remote_code=True)
model = AutoModel.from_pretrained("apple/aimv2-1B-patch14-224", trust_remote_code=True)
```

- MLX
How to use apple/aimv2-1B-patch14-224 with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir aimv2-1B-patch14-224 apple/aimv2-1B-patch14-224
```
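The extractors above return one feature vector per image patch (at 224×224 input and patch size 14, that is 16×16 = 256 patches). A common follow-up step is mean-pooling those patch features into a single image embedding for retrieval. A minimal sketch using a dummy feature array in place of real model output — the hidden dimension of 2048 is an assumption, not taken from the model config:

```python
import numpy as np

# Dummy stand-in for the extractor's output for one image:
# (224 // 14) ** 2 = 256 patch tokens, hypothetical feature dim 2048.
num_patches, dim = (224 // 14) ** 2, 2048
rng = np.random.default_rng(0)
patch_features = rng.random((num_patches, dim), dtype=np.float32)

# Mean-pool the patch features into a single image embedding,
# then L2-normalize so dot products act as cosine similarities.
embedding = patch_features.mean(axis=0)
embedding = embedding / np.linalg.norm(embedding)

print(embedding.shape)  # (2048,)
```

With real model output, `patch_features` would be the array returned by the pipeline (or `model(**processor(images=image, return_tensors="pt")).last_hidden_state` for the direct-loading path).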
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio