# Troubleshooting Guide

## Common Issues and Solutions

### 🔄 HuggingFace API Update (November 2025)

**Important:** HuggingFace has updated their Inference API endpoint.

**If you see 404 errors:**

- ✅ Make sure you're using the latest version of `llm_backend.py`
- ✅ The new endpoint is: `https://router.huggingface.co/hf-inference/`
- ✅ The old endpoint (`api-inference.huggingface.co`) is deprecated

**Already updated:** This version uses the new Inference Providers API automatically.
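If your own code still targets the old host, the change amounts to swapping the base URL. A minimal sketch of the change; the `models/<model_id>` path mirrors the old API's layout and is an assumption here, so verify it against the current Inference Providers docs:

```python
# Sketch of the endpoint change described above. The "models/<model_id>"
# path is an assumption based on the old API's layout - verify it against
# the current HuggingFace Inference Providers documentation.
NEW_BASE = "https://router.huggingface.co/hf-inference/"
OLD_BASE = "https://api-inference.huggingface.co/"  # deprecated

def inference_url(model_id: str, base: str = NEW_BASE) -> str:
    """Build the inference URL for a model on the given base endpoint."""
    return f"{base}models/{model_id}"

print(inference_url("gpt2"))
```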
---

### ⚠️ "LLM backend not configured" Error

**Symptom:** App shows a warning banner and features don't work

**Cause:** No LLM provider credentials found

**Solutions:**

#### On HuggingFace Spaces

**Option 1: Make Space Public (Easiest)**

1. Go to your Space Settings
2. Change Visibility: Private → **Public**
3. Refresh/Restart the Space
4. `HF_TOKEN` will automatically be available

**Option 2: Add Token Manually (for Private Spaces)**

1. Get your token: https://huggingface.co/settings/tokens
2. Create a token with "Read" permission (if you don't have one)
3. Go to Space Settings → Variables
4. Add a variable:
   - Name: `HUGGINGFACE_API_KEY`
   - Value: your_token_here
5. Restart the Space

**Option 3: Use a Premium Provider**

1. Get an API key from OpenAI or Anthropic
2. Go to Space Settings → Variables
3. Add:
   - `LLM_PROVIDER=openai`
   - `OPENAI_API_KEY=sk-your-key`
4. Restart the Space

#### Running Locally

**Check your environment:**

```bash
python check_env.py
```

**Set credentials:**

```bash
# For OpenAI (recommended)
export OPENAI_API_KEY="sk-your-key-here"

# OR for Anthropic
export ANTHROPIC_API_KEY="your-key-here"

# OR for HuggingFace
export HUGGINGFACE_API_KEY="your-token-here"

# Then run the app
python app.py
```
---

### ⏱️ Generation Takes Too Long

**Symptom:** Waiting 30+ seconds for survey generation

**Cause:** The HuggingFace free tier can be slow, especially during high usage

**Solutions:**

1. **Be patient on the first request** - models "cold start" and can take 60+ seconds
2. **Try again** - you might hit a faster server
3. **Simplify requests:**
   - Use shorter outlines (2-3 sentences)
   - Request fewer questions (5-10)
   - Translate one language at a time
4. **Upgrade to a paid provider:**

   ```bash
   LLM_PROVIDER=openai
   OPENAI_API_KEY=sk-your-key
   ```

   Much faster; costs roughly $0.01-0.05 per survey

---

### 🔴 "AttributeError: 'NoneType' object has no attribute..."

**Symptom:** App crashes on startup

**Cause:** The interface tries to use the backend before it's initialized

**Solution:** Update to the latest version - this has been fixed

**If it still happens:**

1. Check that `app.py` has the latest code
2. Verify `get_language_choices()` doesn't depend on the `survey_trans` instance
3. Make sure all functions check `if survey_gen:` before using it
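The guard in step 3 can be sketched like this. This is a minimal illustration: `survey_gen` follows the guide's wording, while the handler name and message are hypothetical:

```python
# Defensive pattern from step 3: never touch the backend object until
# you've confirmed it was initialized. `survey_gen` stays None when
# backend initialization fails at startup.
survey_gen = None  # set to a real generator instance on successful init

def generate_survey(outline: str) -> str:
    if not survey_gen:
        return "LLM backend not configured - see the setup steps above."
    return survey_gen.generate(outline)
```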
---

### 📊 Analysis Returns Empty/Poor Results

**Symptom:** Analysis doesn't find themes or gives generic insights

**Causes & Solutions:**

1. **Too few responses:**
   - Meaningful analysis needs 10-20+ responses
   - Add more sample data
2. **Low-quality model:**
   - Free HF models may struggle with complex analysis
   - Upgrade to OpenAI GPT-4 or Claude for better results
3. **Responses too short:**
   - Analysis works best with detailed, paragraph-length responses
   - Encourage respondents to elaborate
4. **Wrong format:**
   - Make sure responses are properly formatted JSON
   - Use the "Load Example" button to see the correct format
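For reference, a plausible shape for the responses JSON. The field names here are an assumption for illustration only; the "Load Example" button in the UI shows the exact schema:

```python
import json

# Illustrative responses payload - these field names are an assumption;
# use the "Load Example" button in the UI for the exact schema.
responses = [
    {"question": "What did you like most?",
     "answer": "The onboarding flow was clear and the docs were easy to follow."},
    {"question": "What could we improve?",
     "answer": "Search feels slow on large projects, especially with filters on."},
]
print(json.dumps(responses, indent=2))
```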
---

### 🌍 Translation Failed

**Symptom:** Translation returns an error or garbled text

**Causes & Solutions:**

1. **No survey generated:**
   - Generate a survey first before translating
   - Or upload a valid survey JSON
2. **Model limitations:**
   - Free HF models may struggle with some languages
   - Use OpenAI or Anthropic for better translations
   - Try more common languages (Spanish, French) first
3. **Rate limit:**
   - Wait a few minutes
   - Translate fewer languages at once

---

### 💾 Downloads Don't Work

**Symptom:** Can't download survey/analysis files

**Cause:** Browser/Gradio issues

**Solutions:**

1. **Check the browser console** for errors (F12)
2. **Try a different browser** (Chrome, Firefox)
3. **Copy-paste the data** manually from the output box
4. **Check your Gradio version** - update to the latest:

   ```bash
   pip install --upgrade gradio
   ```

---

### 🔒 Private Space Issues

**Symptom:** Works in a public Space but not a private one

**Cause:** `HF_TOKEN` is not available in private Spaces

**Solution:** Add the token manually:

1. Go to https://huggingface.co/settings/tokens
2. Copy a token with "Read" permission
3. Space Settings → Variables → Add `HUGGINGFACE_API_KEY`
4. Restart

---

### 🚫 Rate Limit Errors

**Symptom:** "Rate limit exceeded" or 429 errors

**HuggingFace Free Tier:**

- Wait 5-10 minutes
- Reduce request frequency
- Upgrade to Pro: https://huggingface.co/pricing

**OpenAI:**

- Check usage: https://platform.openai.com/usage
- Add credits or wait for the reset
- Use GPT-3.5 instead of GPT-4 (cheaper)

**Anthropic:**

- Check the console: https://console.anthropic.com
- Add credits or upgrade your plan
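Whichever provider you use, 429s are usually transient, so a simple exponential-backoff wrapper handles them. This is a generic sketch, not tied to any one client library:

```python
import random
import time

def with_backoff(call, retries=4, base=2.0):
    """Retry `call` on transient errors with exponential backoff + jitter.

    In real code, catch only the provider's rate-limit exception
    (e.g. its HTTP 429 error class) instead of bare Exception.
    """
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries - surface the error
            time.sleep(base * 2 ** attempt + random.random() * 0.1)

# Example: a call that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429: rate limit exceeded")
    return "ok"

print(with_backoff(flaky, base=0.01))
```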
---

### 📱 Mobile/Responsive Issues

**Symptom:** UI doesn't work well on mobile

**Cause:** Gradio has some mobile limitations

**Solutions:**

- Use landscape orientation
- Use a tablet or desktop for the best experience
- Some features may be limited on mobile

---

## Debugging Steps

### 1. Check the Environment

```bash
python check_env.py
```

Look for:

- ✅ At least one API key is SET
- ✅ All dependencies installed
- ✅ Provider will be detected
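If `check_env.py` isn't available where you're debugging, the core of the key check is just a few lines (variable names as used in this guide; the helper itself is a hypothetical stand-in):

```python
import os

# Minimal stand-in for check_env.py: report which provider keys are set.
KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "HUGGINGFACE_API_KEY"]

def set_keys(env=os.environ):
    """Return the provider key names that are present and non-empty."""
    return [k for k in KEYS if env.get(k)]

found = set_keys()
print("Set:", ", ".join(found) if found else "none - configure at least one provider")
```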
### 2. Check the Logs

**On HuggingFace Spaces:**

- Go to the "Logs" tab
- Look for "=== LLM Backend Initialization ==="
- Check which credentials are found

**Running locally:**

- Check the terminal output
- Look for initialization messages
- Check for errors/warnings

### 3. Test the Backend

```bash
python test_hf_backend.py
```

It should show:

- ✅ Backend initialized
- ✅ Generation successful
- ✅ ALL TESTS PASSED

### 4. Start Simple

If issues persist:

1. **Test with OpenAI first** (most reliable)
2. **Use example data** from the UI
3. **Try one feature at a time** (generation only)
4. **Check network connectivity**

---

## Getting Help

### Before Asking for Help

1. ✅ Run `python check_env.py`
2. ✅ Check the logs for error messages
3. ✅ Try with example data
4. ✅ Update to the latest code
5. ✅ Read this troubleshooting guide

### Where to Get Help

1. **Check the documentation:**
   - `README.md` - overview
   - `USAGE_GUIDE.md` - how to use
   - `DEPLOYMENT.md` - setup instructions
   - `QUICK_START_HF_SPACES.md` - fast deployment
2. **Common resources:**
   - Gradio docs: https://gradio.app/docs
   - HuggingFace docs: https://huggingface.co/docs
   - OpenAI docs: https://platform.openai.com/docs
3. **Report issues:**
   - Include the output from `check_env.py`
   - Include relevant logs
   - Describe what you tried

---

## Performance Tips

### For Best Results

**Model Selection:**

- **Best quality:** OpenAI GPT-4 or Anthropic Claude 3.5 Sonnet
- **Best value:** OpenAI GPT-4o-mini
- **Free:** HuggingFace (slower, lower quality)

**Request Optimization:**

- Keep outlines concise (2-4 sentences)
- Request 10-15 questions (not 25)
- Translate common languages first
- Provide 20+ responses for analysis

**Cost Control:**

- Use GPT-4o-mini instead of GPT-4 ($0.15 vs $5 per 1M tokens)
- Cache common surveys
- Batch operations when possible
- Monitor usage dashboards
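To put the GPT-4o-mini vs GPT-4 gap in concrete numbers, here is the arithmetic at the per-1M-token prices quoted above (the token count per survey is a rough assumption):

```python
# Back-of-envelope cost per survey at the per-1M-token prices quoted above.
def cost_usd(tokens: int, price_per_million: float) -> float:
    return tokens / 1_000_000 * price_per_million

survey_tokens = 3_000  # rough request + response size; an assumption

print(f"GPT-4o-mini: ${cost_usd(survey_tokens, 0.15):.4f}")  # fractions of a cent
print(f"GPT-4:       ${cost_usd(survey_tokens, 5.00):.4f}")
```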
---

## Emergency Fixes

### App Won't Start At All

```bash
# 1. Clean install
rm -rf venv
python -m venv venv
source venv/bin/activate  # or venv\Scripts\activate on Windows
pip install -r requirements.txt

# 2. Check the environment
python check_env.py

# 3. Try with a minimal config
export OPENAI_API_KEY="sk-your-key"
python app.py
```

### App Starts But Nothing Works

```bash
# 1. Verify the backend
python test_hf_backend.py

# 2. Check imports
python -c "from llm_backend import LLMBackend; print('OK')"
python -c "from survey_generator import SurveyGenerator; print('OK')"

# 3. Test manually in a Python REPL
python
>>> from llm_backend import LLMBackend, LLMProvider
>>> backend = LLMBackend(provider=LLMProvider.OPENAI)
>>> backend.generate([{"role": "user", "content": "Hello"}], max_tokens=10)
```

---

**Still stuck?** Make sure you're using the latest version of all files and have followed the setup instructions carefully.