# Quick Start: Deploy to HuggingFace Spaces

## ⚡ Super Quick Start (5 Minutes)

### 1. Create Space
- Go to https://huggingface.co/spaces
- Click "Create new Space"
- Name: `conversai` (or your choice)
- SDK: Gradio
- Visibility: **Public** ⚠️ (required for automatic `HF_TOKEN`)
- Click "Create Space"

⚠️ **IMPORTANT:** The Space MUST be **Public** for `HF_TOKEN` to work automatically!
### 2. Upload Files
Upload these files to your Space (drag and drop or use Git):
**Required:**
- `app.py`
- `llm_backend.py`
- `survey_generator.py`
- `survey_translator.py`
- `data_analyzer.py`
- `export_utils.py`
- `requirements.txt`
- `README.md`

**Optional (recommended):**
- `USAGE_GUIDE.md`
- `test_hf_backend.py`
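If you prefer Git over drag-and-drop, the upload might look like the following sketch. `YOUR_USERNAME` and the local source path are placeholders you must replace; the Space name assumes you used `conversai`:

```shell
# Clone your Space's Git repo (replace YOUR_USERNAME with your HF username,
# and "conversai" with your Space name if different)
git clone https://huggingface.co/spaces/YOUR_USERNAME/conversai
cd conversai

# Copy the app files in (the source path is a placeholder), then push
cp /path/to/your/project/*.py /path/to/your/project/requirements.txt /path/to/your/project/README.md .
git add .
git commit -m "Add app files"
git push
```

Pushing to the Space repo triggers the same automatic build as a web upload.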
### 3. Wait for Build
- Space will auto-build (2-3 minutes)
- Watch the "Logs" tab for progress
- When you see "Running on local URL", it's ready!
### 4. Test It!
That's it! No configuration needed. The app automatically uses HuggingFace's free Inference Providers API (updated Nov 2025).
Try these:
- Go to "Generate Survey" tab
- Enter: "I want to understand customer satisfaction with our product"
- Click "Generate Survey"
- Wait ~30 seconds for the first generation
## 🎯 What Works Out of the Box

- ✅ **Survey Generation** - Create professional surveys from outlines
- ✅ **Translation** - Translate to 18+ languages
- ✅ **Data Analysis** - Analyze survey responses with AI
- ✅ **Export** - Download results as JSON
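As a rough sketch of the JSON export step: the real helper lives in `export_utils.py` and may have a different name and signature, and the survey dict shape below is an assumption for illustration.

```python
import json

def export_survey_json(survey: dict, path: str) -> str:
    """Write a generated survey to a JSON file (hypothetical helper,
    mirroring what an export step like export_utils.py might do)."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(survey, f, ensure_ascii=False, indent=2)
    return path

survey = {
    "title": "Customer Satisfaction",
    "questions": [
        {"text": "How satisfied are you with our product?", "type": "likert"},
    ],
}
out = export_survey_json(survey, "survey.json")
print(out)  # survey.json
```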
## ⚙️ Optional: Upgrade to Premium LLMs
For faster, better results, add environment variables in Space Settings:
### OpenAI (Recommended)

```
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
```
### Anthropic Claude

```
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=your-key-here
```
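Behind the scenes, provider selection from these variables likely works along the lines of this sketch. It is hypothetical (the actual logic lives in `llm_backend.py`): an explicit `LLM_PROVIDER` wins, otherwise the presence of a key decides, and the free HuggingFace backend is the default.

```python
import os

def pick_provider() -> str:
    """Choose an LLM backend from environment variables (illustrative sketch)."""
    explicit = os.environ.get("LLM_PROVIDER")
    if explicit:
        return explicit
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"
    if os.environ.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    return "huggingface"  # free default, no key required

os.environ["LLM_PROVIDER"] = "openai"
print(pick_provider())  # openai
```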
## 🔧 Troubleshooting

### "LLM backend not configured"
Cause: Your Space can't access HF_TOKEN.
Solutions:

1. **Make the Space public (easiest):**
   - Go to Space Settings
   - Change visibility from Private → Public
   - Restart the Space

2. **Add a token manually (if the Space must stay private):**
   - Go to https://huggingface.co/settings/tokens
   - Create or copy a token with "Read" permission
   - Go to Space Settings → Variables
   - Add `HUGGINGFACE_API_KEY=<your token>`
   - Restart the Space

3. **Use a different provider:**
   - Add `OPENAI_API_KEY` (with `LLM_PROVIDER=openai`) or `ANTHROPIC_API_KEY` (with `LLM_PROVIDER=anthropic`) in Variables
"Generation takes too long"
Cause: Free HuggingFace Inference API can be slow during high usage.
Solutions:
- Wait longer - First request can take 30-60 seconds
- Try again - May hit a faster server
- Upgrade to OpenAI - Much faster (costs ~$0.01 per survey)
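The "wait longer / try again" advice can be automated if you call the backend yourself. The helper below is an illustrative sketch, not part of the app's code (`with_retries` is a hypothetical name): it retries a callable with exponential backoff, which helps with cold starts on the free Inference API.

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 2.0):
    """Call fn(), retrying with exponential backoff on failure.

    Sketch for dealing with slow/cold-starting free-tier endpoints;
    re-raises the last error if all attempts fail.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))  # 2s, 4s, ... by default

# Demo with a fake endpoint that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("model is loading")
    return "ok"

print(with_retries(flaky, base_delay=0.1))  # ok
```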
"Model returned error"
Common causes:
- Rate limit hit (wait a few minutes)
- Model is loading (first request after idle)
- Input too long (reduce outline length)
Solutions:
- Wait 2-3 minutes and retry
- Use shorter, simpler outlines
- Upgrade to paid provider (OpenAI/Anthropic)
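For the "input too long" case, here is a hedged sketch of pre-trimming an outline before sending it. The character limit and helper name are assumptions for illustration, not the app's actual behavior; it keeps whole sentences so the prompt is not cut mid-sentence.

```python
MAX_OUTLINE_CHARS = 1500  # assumed free-tier budget; tune as needed

def trim_outline(outline: str, limit: int = MAX_OUTLINE_CHARS) -> str:
    """Trim an over-long outline to whole sentences within `limit` chars
    (illustrative sketch, not part of the app's code)."""
    if len(outline) <= limit:
        return outline
    kept, total = [], 0
    for sentence in outline.split(". "):
        if total + len(sentence) + 2 > limit:
            break
        kept.append(sentence)
        total += len(sentence) + 2
    return ". ".join(kept) + ("." if kept else "")
```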
## 📊 Performance Comparison
| Provider | Speed | Quality | Cost | Setup |
|---|---|---|---|---|
| HuggingFace (Free) | Slow | Good | Free | Auto |
| OpenAI | Fast | Excellent | ~$0.01-0.05/survey | Need API key |
| Anthropic | Fast | Excellent | ~$0.01-0.05/survey | Need API key |
## 🧪 Testing Your Deployment

Run the test script:

1. Add `test_hf_backend.py` to your Space
2. Open the Space's terminal (if available) or check the logs
3. Look for "✅ ALL TESTS PASSED!"
## 💡 Pro Tips

**First Request Is Slow** - HuggingFace models "cold start"; the first request may take 60+ seconds.
**Keep It Simple** - For the free tier, use:
- Shorter outlines (2-3 sentences)
- Fewer questions (5-10)
- One language at a time
**Monitor Usage** - Check HuggingFace usage limits at https://huggingface.co/settings/billing

**Upgrade When Ready** - For production, switch to OpenAI:

```
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key
```
## 🚀 Next Steps
Once deployed:
- ✅ **Test all features** - Generation, Translation, Analysis
- ✅ **Read USAGE_GUIDE.md** - Learn best practices
- ✅ **Try examples** - Use the built-in example data
- ✅ **Share your Space** - Get feedback from users
- ✅ **Upgrade when ready** - Add a premium LLM for production
## 📚 Additional Resources

- **Full Documentation:** See `USAGE_GUIDE.md`
- **Deployment Guide:** See `DEPLOYMENT.md`
- **HuggingFace Docs:** https://huggingface.co/docs/hub/spaces
- **Gradio Docs:** https://gradio.app/docs
## ❓ FAQ

**Q: Is it really free?**
A: Yes! HuggingFace provides free Inference API access. Limits apply.
**Q: Can I use my own model?**
A: Yes! Set the `LLM_MODEL=your/model-name` environment variable.
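A minimal sketch of how such an override is typically read. The default model id below is an assumption for illustration, not necessarily what `llm_backend.py` actually uses:

```python
import os

# Assumed fallback model id; check llm_backend.py for the real default
DEFAULT_MODEL = "meta-llama/Llama-3.1-8B-Instruct"

def resolve_model() -> str:
    """Return the model id, honoring an LLM_MODEL override (sketch)."""
    return os.environ.get("LLM_MODEL", DEFAULT_MODEL)

os.environ["LLM_MODEL"] = "mistralai/Mistral-7B-Instruct-v0.3"
print(resolve_model())  # mistralai/Mistral-7B-Instruct-v0.3
```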
**Q: How do I upgrade to OpenAI?**
A: Add `OPENAI_API_KEY` in Space Settings and set `LLM_PROVIDER=openai`.

**Q: Can I make it private?**
A: Yes, but you may need to configure `HF_TOKEN` manually.
**Q: How much does OpenAI cost?**
A: Typically $0.01-0.05 per survey with GPT-4o-mini.
Happy researching! 🔬 Need help? Check `USAGE_GUIDE.md` or open an issue.