Core ML Build

Original model: google/vit-base-patch16-224

  • Converter: vit_image_classification
  • Deployment target: iOS 16
  • Format: ML Program
  • Quantization: fp16
  • Classes: 1000
  • Image size: 224x224

Input

  • pixel_values: (1, 3, 224, 224), i.e. (batch, channels, height, width)
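For on-device classification, the input shape above is usually handled by Vision, which resizes and crops the source image to the model's 224x224 input. A minimal sketch, assuming the compiled model ships in the app bundle under the hypothetical resource name `vit_base_patch16_224`:

```swift
import CoreML
import Vision

// Run the ViT classifier on a CGImage and print the top-5 of the 1000 classes.
// The bundled resource name is an assumption; adjust it to your project.
func classify(_ image: CGImage) throws {
    let url = Bundle.main.url(forResource: "vit_base_patch16_224",
                              withExtension: "mlmodelc")!
    let config = MLModelConfiguration()
    config.computeUnits = .all  // let Core ML choose CPU, GPU, or Neural Engine

    let mlModel = try MLModel(contentsOf: url, configuration: config)
    let vnModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        for observation in results.prefix(5) {
            print("\(observation.identifier): \(observation.confidence)")
        }
    }
    // Center-crop so the image matches the 224x224 input without distortion.
    request.imageCropAndScaleOption = .centerCrop

    try VNImageRequestHandler(cgImage: image).perform([request])
}
```

Vision takes care of pixel-format conversion, so no manual construction of the (1, 3, 224, 224) tensor is needed for the common image-classification path.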

Size

Metric          Value
Parameters      86,567,656
Weights (est.)  165.1 MB
.mlpackage      165.3 MB
.mlmodelc       165.3 MB

Compatibility

This model requires iOS 16+ / macOS 13+ / tvOS 16+ / watchOS 9+.
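In Swift, that minimum can be enforced with an availability guard before loading the model. A sketch, assuming the .mlpackage is downloaded at runtime rather than bundled (a bundled model is compiled at build time instead):

```swift
import CoreML

// Compile and load the downloaded package, returning nil on OS versions
// older than the iOS 16-era Core ML runtime this model requires.
func loadViT(at packageURL: URL) throws -> MLModel? {
    guard #available(iOS 16.0, macOS 13.0, tvOS 16.0, watchOS 9.0, *) else {
        return nil  // ML Program format needs iOS 16+ / macOS 13+
    }
    let compiledURL = try MLModel.compileModel(at: packageURL)  // yields .mlmodelc
    let config = MLModelConfiguration()
    config.computeUnits = .all  // allow CPU, GPU, and Neural Engine
    return try MLModel(contentsOf: compiledURL, configuration: config)
}
```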


Repository: okigan/vit-base-patch16-224-coreml-ios16-fp16