Spaces: Runtime error

Matan Kriel committed · Commit 980e0ef · Parent(s): b6d47fe
changed all files to the working files from working repo

Browse files:
- Assignment_3_Food_Match.ipynb +0 -0
- README.md +99 -56
- app.py +1 -1
Assignment_3_Food_Match.ipynb
ADDED
The diff for this file is too large to render. See raw diff.
README.md
CHANGED
@@ -1,35 +1,22 @@
---
-title: Food
-emoji:
-colorFrom:
-colorTo:
sdk: gradio
-sdk_version:
app_file: app.py
pinned: false
-license: mit
-short_description: Trained model to detect and recommend similar foods.
---

# 🍔 Visual Dish Matcher AI

-**A computer vision app that suggests dishes based on visual

## 🎯 Project Overview
-This project

-
----
-
-## 🛠️ Tech Stack
-* **Model:** OpenAI CLIP (`clip-vit-base-patch32`)
-* **Frameworks:** PyTorch, Transformers, Datasets (Hugging Face)
-* **Interface:** Gradio
-* **Data Storage:** Parquet (via Git LFS)
-* **Visualization:** Matplotlib, Seaborn, Scikit-Learn (t-SNE/PCA)
-
----

## DataSet - Food101
@@ -51,68 +38,124 @@ Labels: 101 unique Integer IDs mapped to human-readable Class Names.

---

-
-**
-

-### 1. Data
-Before
-* **Format Correction:** Converted distinct Grayscale images to RGB to ensure compatibility with the CLIP model.
-* **Outlier Detection:** Analyzed image brightness and aspect ratios to identify and flag low-quality or distorted images (e.g., pitch-black photos or extreme panoramas).

-
-We

-

-

---

-##
-

-###
-
-* **Algorithm:** K-Means (k=50)
-* **Dimensionality Reduction:** t-SNE (to visualize 512D vectors in 2D)

-*

-

---

-##
-

---

-##
-
-*
-

---

-##
-*
-*
---
---
+title: Food Matcher AI (SigLIP Edition)
+emoji: 🍔
+colorFrom: green
+colorTo: yellow
sdk: gradio
+sdk_version: 5.0.0
app_file: app.py
pinned: false
---

# 🍔 Visual Dish Matcher AI

+**A computer vision app that suggests recipes and dishes based on visual similarity using Google's SigLIP model.**

## 🎯 Project Overview
+This project builds a **Visual Search Engine** for food. Instead of relying on text labels (which can be inaccurate or missing), we use **Vector Embeddings** to find dishes that look similar.

+---

## DataSet - Food101

---

+**Key Features:**
+* **Multimodal Search:** Find food using an image *or* a text description.
+* **Advanced Data Cleaning:** Automated detection of blurry or low-quality images.
+* **Model Comparison:** A scientific comparison between **OpenAI CLIP** and **Google SigLIP** to choose the best engine.
+
+**Live Demo:** [Click the "App" tab above to view]
+
+---
+
+## 🛠️ Tech Stack
+* **Model:** Google SigLIP (`google/siglip-base-patch16-224`)
+* **Frameworks:** PyTorch, Transformers, Gradio, Datasets
+* **Data Engineering:** OpenCV (Feature Extraction), NumPy
+* **Data Storage:** Parquet (via Git LFS)
+* **Visualization:** Matplotlib, Seaborn, Scikit-Learn (t-SNE/PCA)
+
+---
+
+## 📊 Part 1: Data Analysis & Cleaning
+**Dataset:** [Food-101 (ETH Zurich)](https://huggingface.co/datasets/ethz/food101) (subset of 5,000 images).

+### 1. Exploratory Data Analysis (EDA)
+Before any modeling, we analyzed the raw data to ensure quality and balance.
+
+* **Class Balance Check:** We verified that our random subset of 5,000 images maintained a healthy distribution across the 101 food categories (approx. 50 images per class).
+* **Image Dimensions:** We visualized the width and height distributions to identify unusually small or large images.
+* **Outlier Detection:** We plotted the distributions of **Aspect Ratios** and **Brightness Levels**.

+

+

+

+### 2. Data Cleaning
+Based on the plots above, **we deleted "bad" images** (see the filtering sketch below) that were:
+* Too dark (avg pixel intensity < 20)
+* Too bright / washed out (avg pixel intensity > 245)
+* Extreme aspect ratios (too stretched or squashed, AR > 3.0)
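
A minimal sketch of these cleaning rules, using the thresholds from the list above. The helper name `is_bad_image` and the PIL/NumPy approach are our own illustration, not the project's exact code:

```python
import numpy as np
from PIL import Image

DARK_THRESHOLD = 20      # avg pixel intensity below this => too dark
BRIGHT_THRESHOLD = 245   # avg pixel intensity above this => washed out
MAX_ASPECT_RATIO = 3.0   # longer side more than 3x the shorter => too stretched

def is_bad_image(img: Image.Image) -> bool:
    """Flag images that violate the brightness or aspect-ratio rules."""
    gray = np.asarray(img.convert("L"), dtype=np.float32)
    w, h = img.size
    aspect_ratio = max(w, h) / min(w, h)
    return (
        gray.mean() < DARK_THRESHOLD
        or gray.mean() > BRIGHT_THRESHOLD
        or aspect_ratio > MAX_ASPECT_RATIO
    )

# Example: drop flagged images from a Hugging Face `datasets` split.
# clean_ds = ds.filter(lambda ex: not is_bad_image(ex["image"]))
```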

---

+## ⚔️ Part 2: Model Comparison (CLIP vs. SigLIP vs. MetaCLIP)
+To ensure the best search results, we ran a "Challenger" test between three leading multimodal models.
+
+### The Contestants:
+1. **Baseline:** OpenAI CLIP (`clip-vit-base-patch32`)
+2. **Challenger:** Google SigLIP (`siglip-base-patch16-224`)
+3. **Challenger:** Facebook MetaCLIP (`facebook/metaclip-b32-400m`)

+### The Evaluation:
+We compared them using **Silhouette Scores** (measuring how distinct the food clusters are) and a visual "Taste Test" (checking nearest neighbors for specific dishes); a scoring sketch follows the results below.

+* **Metric:** Silhouette Score
+* **Winner:** **Google SigLIP** (produced cleaner, more distinct clusters and better visual matches).
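
A sketch of how such a comparison can be scored. The arrays `clip_emb`, `siglip_emb`, `metaclip_emb` and the `labels` vector are placeholders, assumed to be precomputed (N, D) embedding matrices and per-image class IDs:

```python
from sklearn.metrics import silhouette_score

def cluster_quality(embeddings, labels) -> float:
    """Higher silhouette score => tighter, better-separated food clusters."""
    return silhouette_score(embeddings, labels, metric="cosine")

# for name, emb in [("CLIP", clip_emb), ("SigLIP", siglip_emb), ("MetaCLIP", metaclip_emb)]:
#     print(f"{name}: {cluster_quality(emb, labels):.3f}")
```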

+**Visual Comparison:**
+We queried both models with the same image to see which returned more accurate similar foods.

+

---

+## 🧠 Part 3: Embeddings & Clustering
+Using the winning model (**SigLIP**), we applied dimensionality reduction to visualize how the AI groups food concepts.
+
+* **Algorithm:** K-Means Clustering (k=101 categories).
+* **Visualization:**
+  * **PCA:** To see the global variance.
+  * **t-SNE:** To see local groupings (e.g., "Sushi" clusters separately from "Burgers"); a code sketch follows the plot below.

+

---

+## 🚀 Part 4: The Application
+The final product is a **Gradio** web application hosted on Hugging Face Spaces.
+
+1. **Image-to-Image:** Upload a photo (e.g., a burger) -> the app embeds it with SigLIP -> finds the 3 nearest visual matches.
+2. **Text-to-Image:** Type "Spicy Tacos" -> the app finds images matching that description.
+
+## Note
+The deployed application runs the CLIP model even though SigLIP won the comparison: SigLIP was too big to run on the Hugging Face Spaces free tier. A retrieval sketch follows.
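
A minimal sketch of both search modes using the CLIP checkpoint the Space actually runs. The parquet `"embedding"` column and the helper names are assumptions, not the app's exact code:

```python
import numpy as np
import pandas as pd
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

# Pre-computed vector database (see `food_embeddings.parquet` below).
db = pd.read_parquet("food_embeddings.parquet")
db_vecs = np.stack(db["embedding"].to_numpy()).astype(np.float32)
db_vecs /= np.linalg.norm(db_vecs, axis=1, keepdims=True)  # unit vectors

def embed_image(img: Image.Image) -> np.ndarray:
    inputs = processor(images=img, return_tensors="pt")
    with torch.no_grad():
        vec = model.get_image_features(**inputs)
    return (vec / vec.norm(dim=-1, keepdim=True)).squeeze(0).numpy()

def embed_text(query: str) -> np.ndarray:
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        vec = model.get_text_features(**inputs)
    return (vec / vec.norm(dim=-1, keepdim=True)).squeeze(0).numpy()

def top_matches(query_vec: np.ndarray, k: int = 3) -> pd.DataFrame:
    sims = db_vecs @ query_vec  # cosine similarity on unit vectors
    return db.iloc[np.argsort(-sims)[:k]]

# top_matches(embed_image(Image.open("burger.jpg")))  # Image-to-Image
# top_matches(embed_text("Spicy Tacos"))              # Text-to-Image
```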
+
+### How to Run Locally
+1. **Clone the repository:**
+   ```bash
+   git clone https://huggingface.co/spaces/YOUR_USERNAME/Food-Match
+   cd Food-Match
+   ```
+2. **Install dependencies:**
+   ```bash
+   pip install -r requirements.txt
+   ```
+3. **Run the app:**
+   ```bash
+   python app.py
+   ```

---

+## 📂 Repository Structure
+* `app.py`: Main application logic (Gradio + SigLIP).
+* `food_embeddings.parquet`: Pre-computed vector database (see the peek below).
+* `requirements.txt`: Python dependencies (includes `sentencepiece`, `protobuf`).
+* `README.md`: Project documentation.
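
To inspect the pre-computed vector database locally, a quick peek; the column names are assumptions based on how the app uses the file, not a documented schema:

```python
import pandas as pd

db = pd.read_parquet("food_embeddings.parquet")
print(db.shape)                  # rows = images in the index
print(db.columns.tolist())       # e.g. ["label", "embedding"]
print(len(db["embedding"].iloc[0]), "dimensions per vector")
```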
+
---

+## ✍️ Authors
+**Matan Kriel**
+**Odeya Shmuel**
+*Assignment #3: Embeddings, RecSys, and Spaces*
app.py
CHANGED
@@ -85,7 +85,7 @@ with gr.Blocks(title="Food Matcher AI") as demo:
 gr.HTML("""
 <div style="display: flex; justify-content: center;">
 <iframe width="560" height="315"
-src="https://www.youtube.com/
+src="https://www.youtube.com/embed/IXeIxYHi0Es"
 title="YouTube video player"
 frameborder="0"
 allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"