---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

This documentation provides a detailed overview of the CLIP-based Image & Text Search App, which uses Streamlit as the frontend and Pinecone for vector-based image retrieval.

## Project Overview

This project enables text-to-image and image-to-image search using OpenAI's CLIP model. It retrieves similar images from a Pinecone vector database.

## Features

- Text-to-Image Search – find images using text descriptions.
- Image-to-Image Search – upload an image to find visually similar images.
- CLIP Model Integration – uses OpenAI's CLIP (Contrastive Language-Image Pretraining) model.
- Pinecone for Vector Search – stores and retrieves image embeddings efficiently.
- Streamlit Interface – provides an interactive UI.

## Installation and Dependencies

1. Ensure you have Python installed, then install the dependencies:

   `pip install -r requirements.txt`

2. Start the Streamlit app:

   `streamlit run app.py`

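The contents of `requirements.txt` are not reproduced in this README; a plausible minimal dependency set for a Streamlit + CLIP + Pinecone stack (an assumption, not the repository's actual file) would look like:

```text
# Hypothetical requirements.txt – adjust to the repository's actual pins.
streamlit
transformers
torch
pinecone-client
Pillow
```
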
## Application Workflow

1. Load CLIP Model & Processor – initializes the CLIP model for text and image embeddings.
2. Generate Embeddings – converts input text/images into 768-dimensional feature vectors.
3. Query Pinecone Database – searches the Pinecone index for similar image embeddings.
4. Display Results – shows matching images with similarity scores.

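Step 3 is where Pinecone does the heavy lifting: it ranks stored image embeddings by similarity to the query vector. As an illustration of that ranking (not the app's actual code), here is a pure-Python stand-in using cosine similarity, one of the metrics Pinecone supports:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def query_index(stored, query_vec, top_k=3):
    """Rank stored {image_id: embedding} entries against a query vector,
    mimicking what a Pinecone similarity query returns."""
    scored = [(image_id, cosine_similarity(vec, query_vec))
              for image_id, vec in stored.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Toy 3-dimensional embeddings; the real app uses 768-dimensional CLIP vectors.
stored = {
    "cat.jpg": [1.0, 0.0, 0.0],
    "dog.jpg": [0.9, 0.1, 0.0],
    "car.jpg": [0.0, 0.0, 1.0],
}
# A query vector close to "cat.jpg" ranks it first, then "dog.jpg".
print(query_index(stored, [1.0, 0.05, 0.0], top_k=2))
```

In the app itself this ranking happens inside Pinecone's service; the client only sends the query vector and a `top_k` value.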
## Code in Streamlit

The Streamlit script (`app.py`) contains:

- UI Setup – configures the Streamlit layout and sidebar controls.
- CLIP Model Initialization – loads the pre-trained CLIP model.
- Embedding Generation – converts text and images into numerical vectors.
- Pinecone Search Queries – finds similar images based on embeddings.
- Display Results – shows search results in a grid layout.

## Setup Details

1. Set Up Streamlit UI
2. Initialize Pinecone
3. Load CLIP Model & Processor
4. Sidebar for Input Controls
5. Text Search Processing
6. Image Search Processing
7. Function to Generate Text Embeddings
8. Function to Generate Image Embeddings
9. Pinecone Query to Find Similar Images
10. Displaying Search Results
11. Deployment on Hugging Face

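Steps 7–9 of the outline above can be sketched as follows. This is a sketch under assumptions, not the repository's code: the checkpoint name `openai/clip-vit-large-patch14` (whose projection size matches the 768-dimensional vectors mentioned earlier) and the exact Pinecone query shape are guesses.

```python
def get_text_embedding(query: str) -> list[float]:
    """Step 7: turn a text query into a 768-dimensional CLIP feature vector."""
    # Lazy imports; a real app would load the model once (e.g. st.cache_resource).
    from transformers import CLIPModel, CLIPProcessor
    model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    return model.get_text_features(**inputs)[0].detach().tolist()

def get_image_embedding(image) -> list[float]:
    """Step 8: turn a PIL image into a 768-dimensional CLIP feature vector."""
    from transformers import CLIPModel, CLIPProcessor
    model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
    inputs = processor(images=image, return_tensors="pt")
    return model.get_image_features(**inputs)[0].detach().tolist()

def build_query(vector: list[float], top_k: int = 5) -> dict:
    """Step 9: keyword arguments for Index.query in the Pinecone client."""
    return {"vector": vector, "top_k": top_k, "include_metadata": True}
```

With a Pinecone index handle `index`, step 9 then becomes `index.query(**build_query(get_text_embedding("a red car")))`, and step 10 iterates over the returned matches to display images and similarity scores.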
## Conclusion

Once deployed on Hugging Face, the app runs and serves the Streamlit page of the CLIP-based Image & Text Search App.