---
title: CLIP Gradio Demo
emoji: 🖼️
colorFrom: blue
colorTo: orange
sdk: gradio
sdk_version: 3.36.1
app_file: app.py
pinned: false
---
# CLIP Gradio Demo

> Repository to host CLIP (Contrastive Language-Image Pretraining) on Hugging Face Spaces

CLIP is an open-source, multi-modal, zero-shot model. Given an image and a set of text descriptions, the model predicts the most relevant description for that image without having been optimized for that particular task (hence zero-shot).
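The zero-shot prediction described above can be sketched with the Hugging Face `transformers` implementation of CLIP. This is a minimal illustration, not the actual `app.py` of this Space; the checkpoint name, image URL, and candidate labels are example choices:

```python
# Sketch: zero-shot image classification with CLIP via transformers.
# Assumptions: the "openai/clip-vit-base-patch32" checkpoint and a sample
# COCO image URL; swap in your own image and labels as needed.
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Encode the image and all candidate texts in one batch.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns
# them into a probability over the candidate descriptions.
probs = outputs.logits_per_image.softmax(dim=-1)
best_label = labels[probs.argmax().item()]
print(best_label)
```

Because CLIP scores arbitrary text against the image, changing `labels` changes the classification task with no retraining.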