Instructions for using apple/deeplabv3-mobilevit-small with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use apple/deeplabv3-mobilevit-small with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-segmentation", model="apple/deeplabv3-mobilevit-small")
```

```python
# Load the model directly
from transformers import AutoImageProcessor, MobileViTForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("apple/deeplabv3-mobilevit-small")
model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-small")
```
- Inference
- Notebooks
- Google Colab
- Kaggle
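The direct-load usage above can be extended into a full inference pass. A minimal sketch, assuming a synthetic placeholder image (any RGB image works in practice):

```python
# Sketch: end-to-end semantic segmentation with the directly loaded
# model. The input image here is a synthetic gray placeholder.
from PIL import Image
import torch
from transformers import AutoImageProcessor, MobileViTForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("apple/deeplabv3-mobilevit-small")
model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-small")

image = Image.new("RGB", (512, 512), color=(128, 128, 128))  # placeholder input
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# logits: (batch, num_labels, H', W') at reduced spatial resolution;
# argmax over the label axis gives per-pixel class indices.
predicted = outputs.logits.argmax(dim=1)
```

The per-pixel map can be upsampled back to the input resolution with `torch.nn.functional.interpolate` if full-size masks are needed.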
Add TF weights (#1)
opened by sayakpaul (HF Staff)
Model converted with the transformers `pt_to_tf` CLI. All converted model outputs and hidden layers were validated against their PyTorch counterparts.
Maximum crossload output difference=2.029e-04; Maximum crossload hidden layer difference=9.155e-05;
Maximum conversion output difference=2.029e-04; Maximum conversion hidden layer difference=9.155e-05;
CAUTION: The maximum admissible error was manually increased to 0.0003!
@joaogante @lysandre relevant discussion as to why the threshold had to be adjusted: https://github.com/huggingface/transformers/pull/18555#issuecomment-1229703811
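The validation described above boils down to an element-wise comparison of PyTorch and TensorFlow outputs against a maximum-absolute-difference threshold. A minimal sketch with stand-in arrays (the values are illustrative, not real model outputs):

```python
# Sketch of the crossload validation: compare two output arrays
# element-wise and check the max absolute difference against the
# manually raised 3e-4 threshold mentioned in the PR.
import numpy as np

ATOL = 3e-4  # admissible error after the manual increase

pt_out = np.array([0.10000, 0.20010, 0.29995])  # stand-in PyTorch outputs
tf_out = np.array([0.10015, 0.20000, 0.30010])  # stand-in TF outputs

max_diff = np.max(np.abs(pt_out - tf_out))
assert max_diff < ATOL, f"difference {max_diff:.3e} exceeds tolerance {ATOL:.0e}"
print(f"max difference: {max_diff:.3e}")
```

The reported crossload difference of 2.029e-04 passes this check only because the default threshold was raised to 3e-4, which is why the linked discussion was needed.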
Matthijs changed pull request status to merged