---
title: CLIP Gradio Demo
emoji: 🖼️
colorFrom: blue
colorTo: orange
sdk: gradio
sdk_version: 3.36.1
app_file: app.py
pinned: false
---
# CLIP Gradio Demo
> Repository hosting a CLIP (Contrastive Language-Image Pre-training) demo on Hugging Face Spaces

CLIP is an open-source, multi-modal, zero-shot model. Given an image and a set of text descriptions, the model predicts the most relevant description for that image without having been optimized for that particular task (hence "zero-shot").
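As a minimal sketch of how such a zero-shot prediction works (this is an illustrative example using the `transformers` library and the standard `openai/clip-vit-base-patch32` checkpoint, not necessarily the exact code in `app.py`):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the pretrained CLIP model and its input processor.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# A placeholder image; in the demo this would come from the Gradio input.
image = Image.new("RGB", (224, 224), color="red")

# Candidate text descriptions to rank against the image.
labels = ["a photo of a cat", "a photo of a dog", "a solid red square"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over image-text similarity scores gives a probability per label.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```

The key point is that the candidate labels are arbitrary free text supplied at inference time; no fine-tuning on those classes is needed.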