CoreML Build

Original model: google/vit-base-patch16-224

Converter: vit_image_classification
Deployment target: iOS 16
Format: ML Program
Quantization: int8
Classes: 1000
Image size: 224×224

Input

  • pixel_values: (1, 3, 224, 224)
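The fixed input shape follows directly from the model name: a 224×224 image split into 16×16 patches yields a 14×14 grid, plus one [CLS] token. A minimal sketch of that arithmetic (all numbers derived from the card above, none measured from the model):

```python
# How the (1, 3, 224, 224) input relates to the ViT-B/16 token geometry.
image_size = 224
patch_size = 16

patches_per_side = image_size // patch_size   # 14 patches along each side
num_patches = patches_per_side ** 2           # 196 image patches
seq_len = num_patches + 1                     # +1 for the [CLS] token

print(patches_per_side, num_patches, seq_len)  # 14 196 197
```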

Size

Metric          Value
Parameters      86,567,656
Weights (est.)  82.6 MB
.mlpackage      83.0 MB
.mlmodelc       83.1 MB
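The parameter count and int8 weight estimate in the table can be cross-checked from the standard ViT-Base architecture (hidden size 768, 12 encoder layers, MLP width 3072, 1000-class head). This is a sketch under those assumed dimensions, not an inspection of the converted model; it also assumes roughly 1 byte per parameter for int8 storage, ignoring quantization scale/offset overhead:

```python
# Cross-check of the Size table: ViT-Base/16 @ 224 with a 1000-class head.
hidden, layers, mlp, patches, classes = 768, 12, 3072, 14 * 14, 1000

patch_embed = 16 * 16 * 3 * hidden + hidden      # patch projection conv + bias
cls_and_pos = hidden + (patches + 1) * hidden    # [CLS] token + position embeddings
per_layer = (
    2 * hidden                                   # LayerNorm 1 (scale + bias)
    + 3 * (hidden * hidden + hidden)             # Q, K, V projections
    + hidden * hidden + hidden                   # attention output projection
    + 2 * hidden                                 # LayerNorm 2
    + hidden * mlp + mlp                         # MLP up-projection
    + mlp * hidden + hidden                      # MLP down-projection
)
final_ln = 2 * hidden
head = hidden * classes + classes                # classifier over 1000 classes

total = patch_embed + cls_and_pos + layers * per_layer + final_ln + head
print(total)                    # 86567656, matching the Parameters row
print(round(total / 2**20, 1))  # 82.6, matching Weights (est.) at ~1 byte/param
```

Note the size figures are binary megabytes (MiB): 86,567,656 bytes / 2²⁰ ≈ 82.6.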

Compatibility

This model requires iOS 16+ / macOS 13+ / tvOS 16+ / watchOS 9+.

Model tree for okigan/vit-base-patch16-224-coreml-ios16-int8