
apple/MobileCLIP-S2-OpenCLIP

Zero-Shot Image Classification · OpenCLIP · Safetensors · clip

Instructions for using apple/MobileCLIP-S2-OpenCLIP with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • OpenCLIP

    How to use apple/MobileCLIP-S2-OpenCLIP with OpenCLIP:

    import open_clip

    # Download the checkpoint from the Hub and build the model together with
    # its training/evaluation image transforms and the matching tokenizer.
    model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:apple/MobileCLIP-S2-OpenCLIP')
    tokenizer = open_clip.get_tokenizer('hf-hub:apple/MobileCLIP-S2-OpenCLIP')
  • Notebooks
  • Google Colab
  • Kaggle
MobileCLIP-S2-OpenCLIP
800 MB
  • 4 contributors
History: 7 commits
pcuenq HF Staff
AMLR license (#6)
8e8a808 verified about 1 year ago
  • .gitattributes (1.52 kB): initial commit, almost 2 years ago
  • LICENSE (5.82 kB): Add LICENSE, about 1 year ago
  • README.md (3 kB): AMLR license (#6), about 1 year ago
  • fig_accuracy_latency.png (437 kB): Upload fig_accuracy_latency.png (#2), almost 2 years ago
  • merges.txt (525 kB): Add model, almost 2 years ago
  • open_clip_config.json (666 Bytes): Add model, almost 2 years ago
  • open_clip_model.safetensors (398 MB): Add model, almost 2 years ago
  • open_clip_pytorch_model.bin (398 MB): Add model, almost 2 years ago
  • special_tokens_map.json (588 Bytes): Add model, almost 2 years ago
  • tokenizer.json (2.22 MB): Add model, almost 2 years ago
  • tokenizer_config.json (705 Bytes): Add model, almost 2 years ago
  • vocab.json (862 kB): Add model, almost 2 years ago