# Fashion Search Model - GAP-CLIP

Multimodal search model for fashion, combining color embeddings, categorical hierarchy embeddings, and a main CLIP model for fashion item search.

## Description

This project implements an advanced fashion search system based on CLIP, with three specialized models:

1. **Color Model** (`color_model.pt`): a specialized CLIP model that extracts low-dimensional color embeddings from text and images
2. **Hierarchy Model** (`hierarchy_model.pth`): a model that classifies fashion items and encodes their categorical hierarchy into low-dimensional embeddings
3. **Main CLIP Model** (`gap_clip.pth`): the main CLIP model, based on LAION, trained together with the color and hierarchy embeddings

### Architecture

The main model's embedding combines:
- **16 dimensions**: color embeddings (dimensions 0-15)
- **64 dimensions**: hierarchy embeddings (dimensions 16-79)
- **512 dimensions**: standard CLIP embeddings (remaining dimensions)

Total: **592 dimensions** per embedding
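
For reference, a joint embedding from the main model can be split back into its three parts by simple slicing. A minimal sketch (the slice boundaries follow the layout above; the tensor here is just a stand-in for a real embedding):

```python
import torch

# Stand-in for a 592-dimensional embedding from the main model (batch of 1)
embedding = torch.randn(1, 592)

color_part = embedding[:, :16]        # dimensions 0-15: color
hierarchy_part = embedding[:, 16:80]  # dimensions 16-79: hierarchy
clip_part = embedding[:, 80:]         # dimensions 80-591: standard CLIP
```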

## Installation

### Prerequisites

- Python 3.8+
- PyTorch 2.0+
- CUDA (optional, for GPU)

### Installing Dependencies

```bash
pip install -r requirements.txt
```

### Main Dependencies

- `torch>=2.0.0`: PyTorch for deep learning
- `transformers>=4.30.0`: Hugging Face Transformers for CLIP
- `huggingface-hub>=0.16.0`: To download/upload models
- `pillow>=9.0.0`: Image processing
- `pandas>=1.5.0`: Data manipulation
- `scikit-learn>=1.3.0`: Evaluation metrics

## Project Structure

```
.
├── color_model.py           # Color model definition
├── hierarchy_model.py       # Hierarchy model definition
├── main_model.py            # Main CLIP model and training functions
├── config.py                # Configuration for paths and parameters
├── tokenizer_vocab.json     # Tokenizer vocabulary
├── example_usage.py         # Usage examples
├── Models/
│   ├── color_model.pt       # Trained color model
│   ├── hierarchy_model.pth  # Trained hierarchy model
│   └── gap_clip.pth         # Main CLIP model
├── Evaluation/              # Evaluation scripts
│   ├── evaluate_color_embeddings.py
│   ├── main_model_evaluation.py
│   └── ...
├── Data/                    # Training data
│   └── download_data.py
├── requirements.txt         # Python dependencies
└── README.md                # This documentation
```

## Configuration

Main parameters are defined in `config.py`:

```python
color_emb_dim = 16            # Color embedding dimension
hierarchy_emb_dim = 64        # Hierarchy embedding dimension
device = torch.device("mps")  # Device (cuda, mps, cpu)
```
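
The default device is `"mps"` (Apple Silicon). If you are on another machine, a portable fallback is easy to add; a minimal sketch, not part of `config.py` itself:

```python
import torch

# Pick the best available backend: CUDA, then Apple MPS, then CPU
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")
```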

### Model Paths

Default paths are:
- `Models/color_model.pt`: Color model
- `Models/hierarchy_model.pth`: Hierarchy model
- `Models/gap_clip.pth`: Main CLIP model
- `tokenizer_vocab.json`: Tokenizer vocabulary

## Usage

### 1. Load Models from Hugging Face

If your models are already uploaded to Hugging Face:

```python
from example_usage import load_models_from_hf

# Load all models
models = load_models_from_hf("your-username/your-model")

color_model = models['color_model']
hierarchy_model = models['hierarchy_model']
main_model = models['main_model']
processor = models['processor']
device = models['device']
```
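
If you prefer not to go through `example_usage.py`, the checkpoint files can also be fetched directly with `huggingface_hub`. A minimal sketch; the file names are assumptions based on the local layout above, so adjust them to match the actual repository contents:

```python
import torch
from huggingface_hub import hf_hub_download

repo_id = "your-username/your-model"

# Download the raw checkpoints from the Hub (cached locally)
color_ckpt = hf_hub_download(repo_id=repo_id, filename="color_model.pt")
hierarchy_ckpt = hf_hub_download(repo_id=repo_id, filename="hierarchy_model.pth")
main_ckpt = hf_hub_download(repo_id=repo_id, filename="gap_clip.pth")

# Each file can then be loaded with torch.load and handed to the model
# classes defined in color_model.py, hierarchy_model.py and main_model.py
main_state = torch.load(main_ckpt, map_location="cpu")
```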

### 2. Text Search

```python
import torch
from transformers import CLIPProcessor

# Prepare text query
text_query = "red dress"
text_inputs = processor(text=[text_query], padding=True, return_tensors="pt")
text_inputs = {k: v.to(device) for k, v in text_inputs.items()}

# Get main model embeddings
with torch.no_grad():
    outputs = main_model(**text_inputs)
    text_features = outputs.text_embeds

# Get specialized embeddings
color_emb = color_model.get_text_embeddings([text_query])
hierarchy_emb = hierarchy_model.get_text_embeddings([text_query])
```

### 3. Image Search

```python
from PIL import Image

# Load image
image = Image.open("path/to/image.jpg").convert("RGB")
image_inputs = processor(images=[image], return_tensors="pt")
image_inputs = {k: v.to(device) for k, v in image_inputs.items()}

# Get embeddings
with torch.no_grad():
    outputs = main_model(**image_inputs)
    image_features = outputs.image_embeds
```
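
With a text embedding from section 2 and an image embedding from section 3, ranking is just a cosine similarity between the two vectors. A minimal sketch, assuming `text_features` and `image_features` from the snippets above:

```python
import torch.nn.functional as F

# Cosine similarity between the text and image embeddings (shape [1])
similarity = F.cosine_similarity(text_features, image_features, dim=-1)
print(f"text-image similarity: {similarity.item():.4f}")
```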

### 4. Using the Example Script

```bash
python example_usage.py \
    --repo-id your-username/your-model \
    --text "blue jacket" \
    --image path/to/image.jpg
```

## Model Training

### Train the Color Model

```python
from color_model import ColorCLIP, train_color_model

# Configuration
model = ColorCLIP(vocab_size=10000, embedding_dim=16)
# ... dataset configuration ...

# Training
train_color_model(model, train_loader, val_loader, num_epochs=20)
```

### Train the Hierarchy Model

```python
from hierarchy_model import Model as HierarchyModel, train_hierarchy_model

# Configuration
model = HierarchyModel(num_hierarchy_classes=10, embed_dim=64)
# ... dataset configuration ...

# Training
train_hierarchy_model(model, train_loader, val_loader, num_epochs=20)
```

### Train the Main CLIP Model

The main model is trained together with both specialized models:

```bash
python main_model.py
```

**Training Parameters** (in `main_model.py`):
- `num_epochs = 20`: Number of epochs
- `learning_rate = 1e-5`: Learning rate
- `temperature = 0.07`: Temperature for the contrastive loss
- `alignment_weight = 0.5`: Weight of the embedding-alignment term
- `batch_size = 32`: Batch size
- `use_enhanced_loss = True`: Use the enhanced loss with alignment

**Features**:
- Triple contrastive loss (text-image-attributes)
- Direct alignment between the specialized models and the main model
- Early stopping with patience
- Automatic learning rate reduction
- Automatic saving of the best model
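
The actual loss lives in `main_model.py`. Purely as an illustration of the structure described above (a contrastive term scaled by `temperature` plus an alignment term weighted by `alignment_weight`), a rough sketch could look like this; it is not the project's exact implementation:

```python
import torch
import torch.nn.functional as F

def combined_loss_sketch(text_emb, image_emb, color_emb, hierarchy_emb,
                         temperature=0.07, alignment_weight=0.5):
    # Symmetric text-image contrastive term (InfoNCE-style)
    t = F.normalize(text_emb, dim=-1)
    i = F.normalize(image_emb, dim=-1)
    logits = t @ i.t() / temperature
    labels = torch.arange(len(t), device=t.device)
    contrastive = (F.cross_entropy(logits, labels) +
                   F.cross_entropy(logits.t(), labels)) / 2

    # Alignment term: pull the color / hierarchy slices of the joint
    # embedding toward the outputs of the specialized models
    align_color = 1 - F.cosine_similarity(text_emb[:, :16], color_emb, dim=-1).mean()
    align_hierarchy = 1 - F.cosine_similarity(text_emb[:, 16:80], hierarchy_emb, dim=-1).mean()

    return contrastive + alignment_weight * (align_color + align_hierarchy)
```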

## Models

### Color Model

- **Architecture**: ResNet18 (image encoder) + Embedding (text encoder)
- **Embedding dimension**: 16
- **Trained on**: Fashion data with color annotations
- **Usage**: Extract color embeddings from text or images
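
The real class lives in `color_model.py`; as a purely hypothetical skeleton of the architecture described above (a ResNet18 image branch and an embedding-based text branch, both projected to 16 dimensions), it might look roughly like this:

```python
import torch.nn as nn
from torchvision.models import resnet18

class ColorEncoderSketch(nn.Module):
    """Illustrative only; see color_model.ColorCLIP for the actual definition."""

    def __init__(self, vocab_size=10000, embedding_dim=16):
        super().__init__()
        # Image branch: ResNet18 backbone, final layer projected to the color space
        backbone = resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.image_encoder = backbone
        # Text branch: token embeddings, mean-pooled then projected
        self.token_embedding = nn.Embedding(vocab_size, 64)
        self.text_proj = nn.Linear(64, embedding_dim)

    def encode_image(self, pixel_values):
        return self.image_encoder(pixel_values)

    def encode_text(self, token_ids):
        return self.text_proj(self.token_embedding(token_ids).mean(dim=1))
```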

### Hierarchy Model

- **Architecture**: ResNet18 (image encoder) + Embedding (hierarchy encoder)
- **Embedding dimension**: 64
- **Hierarchy classes**: shirt, dress, pant, shoe, bag, etc.
- **Usage**: Classify and encode the categorical hierarchy

### Main CLIP Model

- **Architecture**: CLIP ViT-B/32 (LAION)
- **Base**: `laion/CLIP-ViT-B-32-laion2B-s34B-b79K`
- **Trained with**: Triple contrastive loss including the color and hierarchy embeddings
- **Dimensions**: 592 (512 CLIP + 16 color + 64 hierarchy)
- **Features**:
  - Text-image search
  - Specialized embeddings (color + hierarchy)
  - Alignment with the specialized models
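
The base checkpoint is the public LAION release on the Hugging Face Hub and can be loaded with `transformers` in the usual way (the fine-tuned GAP weights in `gap_clip.pth` are presumably loaded on top of it by `main_model.py`):

```python
from transformers import CLIPModel, CLIPProcessor

base_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
processor = CLIPProcessor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
```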

## Advanced Usage Examples

### Search with Combined Embeddings

```python
import torch.nn.functional as F

# Text query
text_query = "red dress"
text_inputs = processor(text=[text_query], padding=True, return_tensors="pt")
text_inputs = {k: v.to(device) for k, v in text_inputs.items()}

# Main model embeddings
with torch.no_grad():
    outputs = main_model(**text_inputs)
    text_features = outputs.text_embeds

# Extract the specialized parts of the main model's embedding
main_color_emb = text_features[:, :16]        # First 16 dimensions
main_hierarchy_emb = text_features[:, 16:80]  # Dimensions 16-79
main_clip_emb = text_features[:, 80:]         # CLIP dimensions (80+)

# Compare with the specialized models
color_emb = color_model.get_text_embeddings([text_query])
hierarchy_emb = hierarchy_model.get_text_embeddings([text_query])

# Cosine similarity
color_similarity = F.cosine_similarity(color_emb, main_color_emb, dim=1)
hierarchy_similarity = F.cosine_similarity(hierarchy_emb, main_hierarchy_emb, dim=1)
```

### Search in an Image Database

```python
import numpy as np

# Load all images from the database
image_paths = [...]  # List of image paths
image_features_list = []

for img_path in image_paths:
    image = Image.open(img_path).convert("RGB")
    image_inputs = processor(images=[image], return_tensors="pt")
    image_inputs = {k: v.to(device) for k, v in image_inputs.items()}

    with torch.no_grad():
        outputs = main_model(**image_inputs)
        features = outputs.image_embeds
    image_features_list.append(features.cpu().numpy())

# Convert to a numpy array
image_features = np.vstack(image_features_list)

# Search
query = "red dress"
# ... get text_features ...

# Calculate similarities
similarities = F.cosine_similarity(
    text_features,
    torch.from_numpy(image_features).to(device),
    dim=1
)

# Sort by similarity
top_k = 10
top_indices = similarities.argsort(descending=True)[:top_k]
```

## Evaluation

Evaluation scripts are available in `Evaluation/`:

- `evaluate_color_embeddings.py`: Color embeddings evaluation
- `main_model_evaluation.py`: Main model evaluation
- `fashion_search.py`: Fashion search tests

## Citation

If you use these models, please cite:

```bibtex
@misc{fashion-search-model,
  title={GAP (Guaranteed Attribute Position) CLIP: A Multi-Loss Framework for Attribute-Aware Fashion Embeddings},
  author={ },
  year={2024},
  howpublished={\url{https://huggingface.co/Leacb4/gap-clip}}
}
```

## Contributing

Contributions are welcome! Feel free to open an issue or a pull request.

## Contact

Lea Sarfati (lea.attia@gmail.com)

---

**Note**: This project is under active development. For any questions or issues, please open an issue on the repository.