---
title: Dressify - Production-Ready Outfit Recommendation
emoji: 🏆
colorFrom: purple
colorTo: green
sdk: gradio
sdk_version: "5.44.1"
app_file: app.py
pinned: false
---

# Dressify - Production-Ready Outfit Recommendation System

A **research-grade, self-contained** outfit recommendation service that automatically downloads the Polyvore dataset, trains state-of-the-art models, and provides a sophisticated Gradio interface for wardrobe uploads and outfit generation.

## 🚀 Features

- **Self-Contained**: No external dependencies or environment variables needed
- **Auto-Dataset Preparation**: Downloads and processes the Stylique/Polyvore dataset automatically
- **Research-Grade Models**: ResNet50 item embedder + ViT outfit compatibility encoder
- **Advanced Training**: Triplet loss with semi-hard negative mining, mixed precision
- **Production UI**: Gradio interface with wardrobe upload, outfit preview, and JSON export
- **REST API**: FastAPI endpoints for embedding and composition
- **Auto-Bootstrap**: Background training and model reloading

## 🏗️ Architecture

### Data Pipeline

1. **Dataset Download**: Automatically fetches Stylique/Polyvore from the HF Hub
2. **Image Processing**: Unzips `images.zip` and organizes the images into a structured layout
3. **Split Generation**: Creates train/val/test splits (70/15/15) with a deterministic RNG
4. **Triplet Mining**: Generates item triplets and outfit triplets for training

### Model Architecture

- **Item Embedder**: ResNet50 + projection head → 512-D L2-normalized embeddings
- **Outfit Encoder**: Transformer encoder → outfit-level compatibility scoring
- **Loss Functions**: Triplet margin loss with cosine distance and semi-hard mining

### Training Pipeline

- Mixed-precision training with channels-last memory format
- Automatic checkpointing and best-model saving
- Validation metrics and early stopping
- Background training with model reloading

## 🚀 Quick Start

### 1.
Deploy to Hugging Face Space

```bash
# Upload this entire folder as a Space
# The system will automatically:
# - Download the Polyvore dataset
# - Prepare splits and triplets
# - Train models (if no checkpoints exist)
# - Launch the Gradio UI + FastAPI
```

### 2. Local Development

```bash
# Clone and set up
git clone <repository-url>
cd recomendation
pip install -r requirements.txt

# Launch the app (auto-downloads the dataset)
python app.py
```

## 📁 Project Structure

```
recomendation/
├── app.py                    # FastAPI + Gradio app (main entry)
├── inference.py              # Inference service with model loading
├── models/
│   ├── resnet_embedder.py    # ResNet50 + projection head
│   └── vit_outfit.py         # Transformer encoder for outfits
├── data/
│   └── polyvore.py           # PyTorch datasets for training
├── scripts/
│   └── prepare_polyvore.py   # Dataset preparation and splits
├── utils/
│   ├── data_fetch.py         # HF dataset downloader
│   ├── transforms.py         # Image transforms
│   └── export.py             # Model export utilities
├── train_resnet.py           # ResNet training script
├── train_vit_triplet.py      # ViT triplet training script
├── requirements.txt          # Dependencies
├── Dockerfile                # Container deployment
└── README.md                 # This file
```

## 🎯 Model Performance

### Expected Metrics (Research-Grade)

- **Item Embedder**: Triplet accuracy > 85%, validation loss < 0.1
- **Outfit Encoder**: Compatibility AUC > 0.8, precision > 0.75
- **Inference Speed**: < 100 ms per outfit on GPU, < 500 ms on CPU

### Training Time

- **Item Embedder**: ~2-4 hours on an L4 GPU (full dataset)
- **Outfit Encoder**: ~1-2 hours on an L4 GPU (with precomputed embeddings)

## 🎨 Gradio Interface

### Features

- **Wardrobe Upload**: Multi-file drag & drop with previews
- **Outfit Generation**: Top-N recommendations with compatibility scores
- **Preview Stitching**: Visual outfit composition
- **JSON Export**: Structured data for integration
- **Training Monitor**: Real-time training
progress and metrics
- **Status Dashboard**: Bootstrap and training status

### Usage Flow

1. Upload wardrobe images (minimum 4 items recommended)
2. Set context (occasion, weather, style preferences)
3. Generate outfits (default: top 3)
4. View stitched previews and download the JSON

## 🔌 API Endpoints

### FastAPI Server

```bash
# Health check
GET /health

# Image embedding
POST /embed
{
  "images": ["base64_image_1", "base64_image_2"]
}

# Outfit composition
POST /compose
{
  "items": [
    {"id": "item_1", "embedding": [0.1, 0.2, ...], "category": "upper"},
    {"id": "item_2", "embedding": [0.3, 0.4, ...], "category": "bottom"}
  ],
  "context": {"occasion": "casual", "num_outfits": 3}
}

# Model artifacts
GET /artifacts
```

## 🚀 Deployment

### Hugging Face Space

1. Upload this folder as a Space
2. Set the Space type to "Gradio"
3. The system auto-bootstraps on first run
4. Models train automatically if no checkpoints exist
5. The UI becomes available once training completes

### Docker

```bash
# Build and run
docker build -t dressify .
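# The image bundles both servers; map the Gradio (7860) and FastAPI (8000) ports: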
docker run -p 7860:7860 -p 8000:8000 dressify

# Access
# Gradio:  http://localhost:7860
# FastAPI: http://localhost:8000
```

## 📈 Training & Evaluation

### Training Commands

```bash
# Quick training (3 epochs each) runs automatically on Space startup

# Manual training
python train_resnet.py --data_root data/Polyvore --epochs 20
python train_vit_triplet.py --data_root data/Polyvore --epochs 30
```

### Evaluation Metrics

- **Item Level**: Triplet accuracy, embedding quality, retrieval metrics
- **Outfit Level**: Compatibility AUC, precision/recall, diversity scores
- **System Level**: Inference latency, memory usage, throughput

## 🔬 Research Features

### Advanced Training

- Semi-hard negative mining for better triplet selection
- Mixed-precision training with autocast
- Channels-last memory format for CUDA optimization
- Curriculum learning with difficulty progression

### Model Variants

- **Standard**: ResNet50 + medium transformer (balanced)
- **Research**: ResNet101 + large transformer (high performance)

## 🤝 Integration

### Next.js + Supabase

```typescript
// Upload wardrobe
const uploadWardrobe = async (images: File[]) => {
  const formData = new FormData();
  images.forEach(img => formData.append('images', img));
  const response = await fetch('/api/wardrobe/upload', {
    method: 'POST',
    body: formData
  });
  return response.json();
};

// Generate outfits
const generateOutfits = async (wardrobe: WardrobeItem[]) => {
  const response = await fetch('/api/outfits/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ wardrobe, context: { occasion: 'casual' } })
  });
  return response.json();
};
```

### Database Schema

```sql
-- User wardrobe table
CREATE TABLE user_wardrobe (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID REFERENCES auth.users(id),
  image_url TEXT NOT NULL,
  category TEXT,
  embedding VECTOR(512),
  created_at TIMESTAMP DEFAULT NOW()
);

-- Outfit recommendations
CREATE TABLE outfit_recommendations
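-- One row per generated outfit; outfit_items stores the composed item list
-- and compatibility_score the outfit encoder's score (column roles inferred).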
(
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID REFERENCES auth.users(id),
  outfit_items JSONB NOT NULL,
  compatibility_score FLOAT,
  context JSONB,
  created_at TIMESTAMP DEFAULT NOW()
);

-- RLS policies
ALTER TABLE user_wardrobe ENABLE ROW LEVEL SECURITY;
ALTER TABLE outfit_recommendations ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Users can view own wardrobe" ON user_wardrobe
  FOR SELECT USING (auth.uid() = user_id);
CREATE POLICY "Users can insert own wardrobe" ON user_wardrobe
  FOR INSERT WITH CHECK (auth.uid() = user_id);
```

## 🧪 Testing

### Smoke Tests

```bash
# Dataset preparation
python scripts/prepare_polyvore.py --root data/Polyvore --random_split

# Training loops
python train_resnet.py --epochs 1 --batch_size 8
python train_vit_triplet.py --epochs 1 --batch_size 4
```

## 📚 References

- **Dataset**: [Stylique/Polyvore](https://huggingface.co/datasets/Stylique/Polyvore)
- **Reference Space**: [Stylique/recomendation](https://huggingface.co/spaces/Stylique/recomendation)
- **Research Papers**: Triplet loss, transformer encoders, outfit compatibility

## 📄 License

MIT License - see the LICENSE file for details.

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request

## 📞 Support

- **Issues**: GitHub Issues
- **Discussions**: GitHub Discussions
- **Documentation**: This README + inline code comments

---

**Built with ❤️ for the fashion AI community**
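## 🧮 Appendix: Semi-Hard Triplet Mining Sketch

The training sections above describe a triplet margin loss with cosine distance and semi-hard negative mining. The sketch below illustrates that selection rule in plain Python; it is a simplified, hypothetical reference, not the implementation in `train_resnet.py`, and the function names and the fallback used when no semi-hard negative exists are assumptions.

```python
import math

def l2_normalize(v):
    # Item embeddings are L2-normalized, so cosine similarity reduces to a dot product.
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def cosine_distance(a, b):
    a, b = l2_normalize(a), l2_normalize(b)
    return 1.0 - sum(x * y for x, y in zip(a, b))

def semi_hard_triplet_loss(anchor, positive, negatives, margin=0.2):
    """Triplet margin loss with semi-hard negative mining.

    A negative is "semi-hard" when it is farther from the anchor than the
    positive but still inside the margin. Among those, pick the hardest
    (closest); if none exist, fall back to the easiest negative (assumed
    fallback) so well-separated triplets contribute zero loss.
    """
    d_ap = cosine_distance(anchor, positive)
    d_an = [cosine_distance(anchor, n) for n in negatives]
    semi_hard = [d for d in d_an if d_ap < d < d_ap + margin]
    d_neg = min(semi_hard) if semi_hard else max(d_an)
    return max(0.0, d_ap - d_neg + margin)
```

In training, the same rule is applied per anchor over an in-batch distance matrix; the sketch keeps a single anchor for clarity.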