---
title: Clip Search
emoji: 🔥
colorFrom: gray
colorTo: purple
sdk: streamlit
sdk_version: 1.42.2
app_file: src/frontend/app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# clip-search

📸 Image Search with Pinecone and CLIP

## Overview

This project implements an image search engine using OpenAI's CLIP model and Pinecone for vector search. The application allows users to search for images either by text query or by uploading an image.
## Features

- **Image-to-Image Search**: upload an image to find visually similar images.
- **Text-to-Image Search**: enter a text query to find matching images.
- **Pinecone Integration**: uses Pinecone as a vector database for fast similarity searches.
- **Streamlit UI**: a user-friendly interface for searching images.
- **Dataset Management**: reads image metadata from a dataset and indexes it into Pinecone.
### Image-to-Image Search

Users can upload an image, and the system finds visually similar images from the dataset. The uploaded image is converted into an embedding using CLIP, and the nearest image embeddings are retrieved from Pinecone.
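At its core, this retrieval step is a nearest-neighbor search over normalized embeddings by cosine similarity. A minimal NumPy sketch of that idea, with toy low-dimensional vectors standing in for real CLIP embeddings (which Pinecone would otherwise search at scale):

```python
import numpy as np

def top_k_similar(query_vec, index_vecs, k=3):
    """Return indices of the k most cosine-similar rows of index_vecs."""
    # Normalize so that dot products equal cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    m = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    scores = m @ q
    # Highest-scoring indices first.
    return np.argsort(scores)[::-1][:k]

# Toy "embeddings" (real CLIP vectors are 512-dimensional).
index = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
query = np.array([1.0, 0.05])
print(top_k_similar(query, index, k=2))  # → [0 1]
```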
### Text-to-Image Search

Users can enter a text query to find images matching the description. The text is converted into an embedding, and Pinecone retrieves the closest image embeddings, since CLIP places text and images in the same embedding space.
### Pinecone Integration

Pinecone serves as a high-speed vector database, efficiently storing and retrieving embeddings for similarity searches.
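The upsert-then-query pattern the app relies on can be illustrated with a small in-memory stand-in. The class below is purely illustrative (it mimics the shape of an index with `upsert` and `query` methods, not the actual pinecone client API):

```python
import numpy as np

class TinyIndex:
    """In-memory stand-in for a vector index (illustrative only)."""
    def __init__(self):
        self.ids, self.vecs = [], []

    def upsert(self, vectors):
        # vectors: iterable of (id, embedding) pairs.
        for vid, vec in vectors:
            self.ids.append(vid)
            self.vecs.append(np.asarray(vec, dtype=float))

    def query(self, vector, top_k=3):
        q = np.asarray(vector, dtype=float)
        q = q / np.linalg.norm(q)
        scores = [float(v @ q / np.linalg.norm(v)) for v in self.vecs]
        order = np.argsort(scores)[::-1][:top_k]
        return [{"id": self.ids[i], "score": scores[i]} for i in order]

index = TinyIndex()
index.upsert([("cat.jpg", [0.9, 0.1]), ("dog.jpg", [0.1, 0.9])])
print(index.query([1.0, 0.0], top_k=1)[0]["id"])  # → cat.jpg
```

In the real app, the same pattern runs against a hosted Pinecone index, which handles approximate nearest-neighbor search over many more, much higher-dimensional vectors.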
### Streamlit UI

A simple web-based interface for performing searches, supporting both text- and image-based queries.
### Dataset Management

Image metadata is read from a dataset (TSV file); image embeddings are extracted using CLIP and stored in Pinecone.
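Reading the TSV metadata can be done with Python's standard `csv` module. A small sketch, where the column names (`image_id`, `url`) are hypothetical placeholders for whatever the real dataset uses:

```python
import csv
import io

def load_metadata(tsv_text):
    """Parse TSV text into a list of dicts keyed by the header row."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return list(reader)

# Hypothetical two-column layout; the actual dataset's columns may differ.
sample = (
    "image_id\turl\n"
    "img_001\thttp://example.com/a.jpg\n"
    "img_002\thttp://example.com/b.jpg\n"
)
rows = load_metadata(sample)
print([r["image_id"] for r in rows])  # → ['img_001', 'img_002']
```

Each row's image would then be embedded with CLIP and upserted to Pinecone as an (id, vector, metadata) record.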