# 🚨 Embedding Troubleshooting Quick Start

## Common Error Messages & Instant Fixes

### ⚠️ "shapes (768,) and (384,) not aligned"

**What it means:** Your query embeddings (768D) don't match the stored embeddings (384D).

**Instant fix:**

```bash
# Open .env and change:
EMBEDDING_PROVIDER=huggingface
HUGGINGFACE_EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2
# Restart your application
```
### ⚠️ "shapes (384,) and (768,) not aligned"

**What it means:** Your query embeddings (384D) don't match the stored embeddings (768D).

**Instant fix:**

```bash
# Open .env and change:
EMBEDDING_PROVIDER=ollama
OLLAMA_EMBEDDING_MODEL=nomic-embed-text:v1.5
# Make sure Ollama is running: ollama serve
# Pull the model: ollama pull nomic-embed-text:v1.5
# Restart your application
```
### ⚠️ "shapes (1536,) and (384,) not aligned"

**What it means:** Your query embeddings (1536D) don't match the stored embeddings (384D).

**Instant fix:**

```bash
# Open .env and change:
EMBEDDING_PROVIDER=huggingface
HUGGINGFACE_EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2
# Restart your application
```
## 🔧 5-Minute Fix Guide

### Step 1: Identify Your Error (30 seconds)

Look at your error message and find the dimension numbers:

- `shapes (X,) and (Y,)` → X = query dimensions, Y = stored dimensions
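The two numbers can also be pulled out of the error text automatically. A minimal shell sketch — the `err` string below is a stand-in for the actual line from your logs:

```bash
# Stand-in for the real error line from your logs
err='ValueError: shapes (768,) and (384,) not aligned'

# X = query dimensions, Y = stored dimensions
query_dim=$(echo "$err" | sed -E 's/.*shapes \(([0-9]+),\) and \(([0-9]+),\).*/\1/')
stored_dim=$(echo "$err" | sed -E 's/.*shapes \(([0-9]+),\) and \(([0-9]+),\).*/\2/')
echo "query=$query_dim stored=$stored_dim"   # query=768 stored=384
```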
### Step 2: Choose Matching Model (1 minute)

| Stored Dimensions (Y) | Set in .env |
|-----------------------|-------------|
| 384 | `EMBEDDING_PROVIDER=huggingface`<br/>`HUGGINGFACE_EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2` |
| 768 | `EMBEDDING_PROVIDER=ollama`<br/>`OLLAMA_EMBEDDING_MODEL=nomic-embed-text:v1.5` |
| 1024 | `EMBEDDING_PROVIDER=ollama`<br/>`OLLAMA_EMBEDDING_MODEL=mxbai-embed-large` |
| 1536 | `EMBEDDING_PROVIDER=openai`<br/>`OPENAI_EMBEDDING_MODEL=text-embedding-3-small` |
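The lookup above can be wrapped in a small helper so you don't have to consult the table by hand; a sketch (the `dim_to_env` name is made up here):

```bash
# Print the .env lines matching a stored embedding dimension
dim_to_env() {
  case "$1" in
    384)  printf '%s\n' 'EMBEDDING_PROVIDER=huggingface' 'HUGGINGFACE_EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2' ;;
    768)  printf '%s\n' 'EMBEDDING_PROVIDER=ollama' 'OLLAMA_EMBEDDING_MODEL=nomic-embed-text:v1.5' ;;
    1024) printf '%s\n' 'EMBEDDING_PROVIDER=ollama' 'OLLAMA_EMBEDDING_MODEL=mxbai-embed-large' ;;
    1536) printf '%s\n' 'EMBEDDING_PROVIDER=openai' 'OPENAI_EMBEDDING_MODEL=text-embedding-3-small' ;;
    *)    echo "unknown stored dimension: $1" >&2; return 1 ;;
  esac
}

dim_to_env 768
```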
### Step 3: Update Configuration (2 minutes)

```bash
# Edit your .env file
nano .env  # or use your preferred editor
# Find the EMBEDDING_PROVIDER lines, update them, and save the file
```
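If you prefer a non-interactive edit, `sed` can rewrite the lines in place. The sketch below runs against a scratch file (`/tmp/env.demo`, a made-up path) so it is safe to try verbatim; point the same `sed` command at your real `.env` once the output looks right, and keep the `.bak` backup it leaves behind:

```bash
# Demo file standing in for your real .env
printf 'EMBEDDING_PROVIDER=openai\nOPENAI_EMBEDDING_MODEL=text-embedding-3-small\n' > /tmp/env.demo

# Rewrite the provider and model lines in place (keeps a .bak backup)
sed -E -i.bak \
  -e 's|^EMBEDDING_PROVIDER=.*|EMBEDDING_PROVIDER=huggingface|' \
  -e 's|^[A-Z_]+_EMBEDDING_MODEL=.*|HUGGINGFACE_EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2|' \
  /tmp/env.demo

cat /tmp/env.demo
```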
### Step 4: Restart Application (1 minute)

```bash
# Stop the current process (Ctrl+C), then restart:
uvicorn app:app --reload
```

### Step 5: Test (30 seconds)

```bash
# Test with a simple query
curl -X POST "http://localhost:8080/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "chicken recipe"}'
```
## 🔍 Alternative: Start Fresh

If you prefer to switch to a different embedding model permanently:

### Option A: Regenerate Database (5 minutes)

```bash
# 1. Choose your preferred model in .env
EMBEDDING_PROVIDER=ollama
OLLAMA_EMBEDDING_MODEL=nomic-embed-text:v1.5
# 2. Enable database refresh in .env
DB_REFRESH_ON_START=true
# 3. Restart the application (this rebuilds the database)
uvicorn app:app --reload
# 4. IMPORTANT: disable the refresh flag in .env after startup
DB_REFRESH_ON_START=false
```
### Option B: Switch Vector Store (2 minutes)

```bash
# In .env, switch to ChromaDB (this creates a fresh database)
VECTOR_STORE_PROVIDER=chromadb
# Restart the application
uvicorn app:app --reload
```
## ⚡ Prevention Tips

### Document Your Choice

Add a comment to your .env file:

```bash
# Created 2025-08-27 with all-MiniLM-L6-v2 (384 dimensions)
EMBEDDING_PROVIDER=huggingface
HUGGINGFACE_EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2
```
### Consistent Development

If you work in a team, make sure everyone uses the same configuration:

```bash
# Share this in your team chat:
# "Use EMBEDDING_PROVIDER=huggingface with all-MiniLM-L6-v2"
```
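One way to catch drift early is to diff only the embedding-related lines of two `.env` files. A sketch using made-up file names (`/tmp/env.alice`, `/tmp/env.bob`) standing in for two teammates' configs:

```bash
# Extract just the embedding settings, sorted so line order doesn't matter
extract_embedding_cfg() {
  grep -E '^(EMBEDDING_PROVIDER|[A-Z_]+_EMBEDDING_MODEL)=' "$1" | sort
}

# Two hypothetical teammate configs
printf 'EMBEDDING_PROVIDER=huggingface\nHUGGINGFACE_EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2\n' > /tmp/env.alice
printf 'EMBEDDING_PROVIDER=ollama\nOLLAMA_EMBEDDING_MODEL=nomic-embed-text:v1.5\n' > /tmp/env.bob

extract_embedding_cfg /tmp/env.alice > /tmp/alice.cfg
extract_embedding_cfg /tmp/env.bob > /tmp/bob.cfg

if diff -q /tmp/alice.cfg /tmp/bob.cfg > /dev/null; then
  echo "configs match"
else
  echo "configs differ"   # mismatched settings mean mismatched vector databases
fi
```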
---

**Still stuck?** Check the full [Embedding Compatibility Guide](./embedding-compatibility-guide.md) for detailed explanations.