jmisak committed on
Commit
fd77f04
·
verified ·
1 Parent(s): c327bd5

Upload 7 files

Files changed (6)
  1. QUICK_START_HF_SPACES.md +186 -0
  2. README.md +14 -1
  3. TROUBLESHOOTING.md +347 -0
  4. app.py +43 -9
  5. check_env.py +147 -0
  6. test_hf_backend.py +109 -0
QUICK_START_HF_SPACES.md ADDED
@@ -0,0 +1,186 @@
+ # Quick Start: Deploy to HuggingFace Spaces
+
+ ## ⚡ Super Quick Start (5 Minutes)
+
+ ### 1. Create Space
+ 1. Go to https://huggingface.co/spaces
+ 2. Click **"Create new Space"**
+ 3. Name: `conversai` (or your choice)
+ 4. SDK: **Gradio**
+ 5. **Visibility: PUBLIC** ⚠️ (Required for auto HF_TOKEN)
+ 6. Click **"Create Space"**
+
+ **⚠️ IMPORTANT:** The Space MUST be PUBLIC for HF_TOKEN to work automatically!
+
+ ### 2. Upload Files
+
+ Upload these files to your Space (drag and drop or use Git):
+
+ **Required:**
+ - `app.py`
+ - `llm_backend.py`
+ - `survey_generator.py`
+ - `survey_translator.py`
+ - `data_analyzer.py`
+ - `export_utils.py`
+ - `requirements.txt`
+ - `README.md`
+
+ **Optional (recommended):**
+ - `USAGE_GUIDE.md`
+ - `test_hf_backend.py`
+
+ ### 3. Wait for Build
+
+ - Space will auto-build (2-3 minutes)
+ - Watch the "Logs" tab for progress
+ - When you see "Running on local URL", it's ready!
+
+ ### 4. Test It!
+
+ **That's it!** No configuration needed. The app automatically uses HuggingFace's free Inference API.
+
+ **Try these:**
+ 1. Go to "Generate Survey" tab
+ 2. Enter: "I want to understand customer satisfaction with our product"
+ 3. Click "Generate Survey"
+ 4. Wait ~30 seconds for the first generation
+
+ ## 🎯 What Works Out of the Box
+
+ ✅ **Survey Generation** - Create professional surveys from outlines
+ ✅ **Translation** - Translate to 18+ languages
+ ✅ **Data Analysis** - Analyze survey responses with AI
+ ✅ **Export** - Download results as JSON
+
+ ## ⚙️ Optional: Upgrade to Premium LLMs
+
+ For faster, better results, add environment variables in Space Settings:
+
+ ### OpenAI (Recommended)
+ ```
+ LLM_PROVIDER=openai
+ OPENAI_API_KEY=sk-your-key-here
+ ```
+
+ ### Anthropic Claude
+ ```
+ LLM_PROVIDER=anthropic
+ ANTHROPIC_API_KEY=your-key-here
+ ```
+
+ ## 🐛 Troubleshooting
+
+ ### "LLM backend not configured"
+
+ **Cause:** Your Space can't access `HF_TOKEN`.
+
+ **Solutions:**
+
+ 1. **Make Space PUBLIC** (easiest):
+    - Go to Space Settings
+    - Change visibility from Private → Public
+    - Restart the Space
+
+ 2. **Add token manually** (if Space must be private):
+    - Go to https://huggingface.co/settings/tokens
+    - Create/copy a token with "Read" permission
+    - Go to Space Settings → Variables
+    - Add: `HUGGINGFACE_API_KEY` = your token
+    - Restart the Space
+
+ 3. **Use different provider**:
+    - Add `OPENAI_API_KEY` in Variables
+    - Or add `ANTHROPIC_API_KEY`
+
+ ### "Generation takes too long"
+
+ **Cause:** Free HuggingFace Inference API can be slow during high usage.
+
+ **Solutions:**
+ 1. **Wait longer** - First request can take 30-60 seconds
+ 2. **Try again** - May hit a faster server
+ 3. **Upgrade to OpenAI** - Much faster (costs ~$0.01 per survey)
+
+ ### "Model returned error"
+
+ **Common causes:**
+ - Rate limit hit (wait a few minutes)
+ - Model is loading (first request after idle)
+ - Input too long (reduce outline length)
+
+ **Solutions:**
+ - Wait 2-3 minutes and retry
+ - Use shorter, simpler outlines
+ - Upgrade to paid provider (OpenAI/Anthropic)
+
+ ## 📊 Performance Comparison
+
+ | Provider | Speed | Quality | Cost | Setup |
+ |----------|-------|---------|------|-------|
+ | **HuggingFace (Free)** | Slow | Good | Free | Auto |
+ | **OpenAI** | Fast | Excellent | ~$0.01-0.05/survey | Need API key |
+ | **Anthropic** | Fast | Excellent | ~$0.01-0.05/survey | Need API key |
+
+ ## 🔍 Testing Your Deployment
+
+ Run the test script:
+
+ 1. Add `test_hf_backend.py` to your Space
+ 2. Open the Space's terminal (if available) or check logs
+ 3. Look for "✅ ALL TESTS PASSED!"
+
+ ## 💡 Pro Tips
+
+ 1. **First Request is Slow** - HuggingFace models "cold start". First request may take 60+ seconds.
+
+ 2. **Keep It Simple** - For free tier, use:
+    - Shorter outlines (2-3 sentences)
+    - Fewer questions (5-10)
+    - One language at a time
+
+ 3. **Monitor Usage** - Check HuggingFace usage limits at https://huggingface.co/settings/billing
+
+ 4. **Upgrade When Ready** - For production, switch to OpenAI:
+    ```
+    LLM_PROVIDER=openai
+    OPENAI_API_KEY=sk-your-key
+    ```
+
+ ## 🎓 Next Steps
+
+ Once deployed:
+
+ 1. ✅ **Test all features** - Generation, Translation, Analysis
+ 2. ✅ **Read USAGE_GUIDE.md** - Learn best practices
+ 3. ✅ **Try examples** - Use the built-in example data
+ 4. ✅ **Share your Space** - Get feedback from users
+ 5. ✅ **Upgrade when ready** - Add premium LLM for production
+
+ ## 📚 Additional Resources
+
+ - **Full Documentation**: See `USAGE_GUIDE.md`
+ - **Deployment Guide**: See `DEPLOYMENT.md`
+ - **HuggingFace Docs**: https://huggingface.co/docs/hub/spaces
+ - **Gradio Docs**: https://gradio.app/docs
+
+ ## ❓ FAQ
+
+ **Q: Is it really free?**
+ A: Yes! HuggingFace provides free inference API access. Limits apply.
+
+ **Q: Can I use my own model?**
+ A: Yes! Set `LLM_MODEL=your/model-name` environment variable.
+
+ **Q: How do I upgrade to OpenAI?**
+ A: Add `OPENAI_API_KEY` in Space Settings, set `LLM_PROVIDER=openai`.
+
+ **Q: Can I make it private?**
+ A: Yes, but you may need to configure `HF_TOKEN` manually.
+
+ **Q: How much does OpenAI cost?**
+ A: Typically $0.01-0.05 per survey with GPT-4o-mini.
+
+ ---
+
+ Happy researching! 🔬 Need help? Check USAGE_GUIDE.md or open an issue.
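For the curious, the free-tier path this guide relies on boils down to a single authenticated POST to the Inference API. Below is a minimal sketch of assembling such a request without sending it; the helper name, payload shape, and parameter names are illustrative assumptions, not the app's actual `llm_backend` code:

```python
def build_hf_request(model: str, prompt: str, token: str):
    """Assemble URL, headers, and payload for a HuggingFace Inference
    API text-generation call (payload shape is an assumption)."""
    url = f"https://api-inference.huggingface.co/models/{model}"
    headers = {"Authorization": f"Bearer {token}"}
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": 256}}
    return url, headers, payload

url, headers, payload = build_hf_request(
    "mistralai/Mixtral-8x7B-Instruct-v0.1", "Say hello", "hf_xxx")
print(url)  # https://api-inference.huggingface.co/models/mistralai/Mixtral-8x7B-Instruct-v0.1
```

Sending it would be `requests.post(url, headers=headers, json=payload)`; a 503 response while the model loads is the "cold start" case described above.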
README.md CHANGED
@@ -51,10 +51,17 @@ Battle the blank page, reach global audiences, and uncover insights with AI assi

**No configuration needed!** The app automatically uses HuggingFace's Inference API.

- - Uses built-in `HF_TOKEN` (automatically available in Spaces)
+ - Uses built-in `HF_TOKEN` (automatically available in **PUBLIC** Spaces)
- Default model: `mistralai/Mixtral-8x7B-Instruct-v0.1`
- Free tier available

+ **⚠️ Important:** Your Space must be **PUBLIC** for HF_TOKEN to be automatically available.
+
+ **If your Space is PRIVATE**, add `HUGGINGFACE_API_KEY` manually:
+ 1. Go to https://huggingface.co/settings/tokens
+ 2. Copy your token
+ 3. Add it in Space Settings → Variables → `HUGGINGFACE_API_KEY`
+
### Optional: Use Other LLM Providers

For better performance, you can configure alternative providers via environment variables:
@@ -82,7 +89,13 @@ The app automatically detects which provider to use based on available credentia
## 📦 Installation

```bash
+ # Install dependencies
pip install -r requirements.txt
+
+ # Check environment setup (optional but recommended)
+ python check_env.py
+
+ # Run the app
python app.py
```

TROUBLESHOOTING.md ADDED
@@ -0,0 +1,347 @@
+ # Troubleshooting Guide
+
+ ## Common Issues and Solutions
+
+ ### ❌ "LLM backend not configured" Error
+
+ **Symptom:** App shows warning banner, features don't work
+
+ **Cause:** No LLM provider credentials found
+
+ **Solutions:**
+
+ #### On HuggingFace Spaces
+
+ **Option 1: Make Space Public (Easiest)**
+ 1. Go to your Space Settings
+ 2. Change Visibility: Private → **Public**
+ 3. Refresh/Restart the Space
+ 4. HF_TOKEN will automatically be available
+
+ **Option 2: Add Token Manually (for Private Spaces)**
+ 1. Get your token: https://huggingface.co/settings/tokens
+ 2. Create a token with "Read" permission (if you don't have one)
+ 3. Go to Space Settings → Variables
+ 4. Add variable:
+    - Name: `HUGGINGFACE_API_KEY`
+    - Value: your_token_here
+ 5. Restart the Space
+
+ **Option 3: Use Premium Provider**
+ 1. Get an API key from OpenAI or Anthropic
+ 2. Go to Space Settings → Variables
+ 3. Add:
+    - `LLM_PROVIDER=openai`
+    - `OPENAI_API_KEY=sk-your-key`
+ 4. Restart the Space
+
+ #### Running Locally
+
+ **Check your environment:**
+ ```bash
+ python check_env.py
+ ```
+
+ **Set credentials:**
+ ```bash
+ # For OpenAI (recommended)
+ export OPENAI_API_KEY="sk-your-key-here"
+
+ # OR for Anthropic
+ export ANTHROPIC_API_KEY="your-key-here"
+
+ # OR for HuggingFace
+ export HUGGINGFACE_API_KEY="your-token-here"
+
+ # Then run
+ python app.py
+ ```
+
+ ---
+
+ ### ⏱️ Generation Takes Too Long
+
+ **Symptom:** Waiting 30+ seconds for survey generation
+
+ **Cause:** HuggingFace free tier can be slow, especially during high usage
+
+ **Solutions:**
+
+ 1. **Be patient on the first request** - Models "cold start" and can take 60+ seconds
+ 2. **Try again** - You might hit a faster server
+ 3. **Simplify requests:**
+    - Use shorter outlines (2-3 sentences)
+    - Request fewer questions (5-10)
+    - Translate one language at a time
+ 4. **Upgrade to a paid provider:**
+    ```bash
+    LLM_PROVIDER=openai
+    OPENAI_API_KEY=sk-your-key
+    ```
+    Much faster, costs ~$0.01-0.05 per survey
+
+ ---
+
+ ### 🔴 "AttributeError: 'NoneType' object has no attribute..."
+
+ **Symptom:** App crashes on startup
+
+ **Cause:** Interface trying to use the backend before it's initialized
+
+ **Solution:** Update to the latest version - this has been fixed
+
+ **If it still happens:**
+ 1. Check that `app.py` has the latest code
+ 2. Verify `get_language_choices()` doesn't depend on the `survey_trans` instance
+ 3. Make sure all functions check `if survey_gen:` before using it
+
+ ---
+
+ ### 📊 Analysis Returns Empty/Poor Results
+
+ **Symptom:** Analysis doesn't find themes or gives generic insights
+
+ **Causes & Solutions:**
+
+ 1. **Too few responses:**
+    - Need 10-20+ responses for meaningful analysis
+    - Add more sample data
+
+ 2. **Low-quality model:**
+    - Free HF models may struggle with complex analysis
+    - Upgrade to OpenAI GPT-4 or Claude for better analysis
+
+ 3. **Responses too short:**
+    - Analysis works best with detailed, paragraph-length responses
+    - Encourage respondents to elaborate
+
+ 4. **Wrong format:**
+    - Make sure responses are properly formatted JSON
+    - Use the "Load Example" button to see the correct format
+
+ ---
+
+ ### 🌍 Translation Failed
+
+ **Symptom:** Translation returns an error or garbled text
+
+ **Causes & Solutions:**
+
+ 1. **No survey generated:**
+    - Generate a survey first before translating
+    - Or upload a valid survey JSON
+
+ 2. **Model limitations:**
+    - Free HF models may struggle with some languages
+    - Use OpenAI or Anthropic for better translations
+    - Try more common languages (Spanish, French) first
+
+ 3. **Rate limit:**
+    - Wait a few minutes
+    - Translate fewer languages at once
+
+ ---
+
+ ### 💾 Downloads Don't Work
+
+ **Symptom:** Can't download survey/analysis files
+
+ **Cause:** Browser/Gradio issues
+
+ **Solutions:**
+
+ 1. **Check the browser console** for errors (F12)
+ 2. **Try a different browser** (Chrome, Firefox)
+ 3. **Copy-paste data** manually from the output box
+ 4. **Check Gradio version** - update to latest:
+    ```bash
+    pip install --upgrade gradio
+    ```
+
+ ---
+
+ ### 🔒 Private Space Issues
+
+ **Symptom:** Works in a public Space but not a private one
+
+ **Cause:** HF_TOKEN is not available in private Spaces
+
+ **Solution:**
+
+ Add the token manually:
+ 1. https://huggingface.co/settings/tokens
+ 2. Copy a token with "Read" permission
+ 3. Space Settings → Variables → Add `HUGGINGFACE_API_KEY`
+ 4. Restart
+
+ ---
+
+ ### 🚫 Rate Limit Errors
+
+ **Symptom:** "Rate limit exceeded" or 429 errors
+
+ **HuggingFace Free Tier:**
+ - Wait 5-10 minutes
+ - Reduce request frequency
+ - Upgrade to Pro: https://huggingface.co/pricing
+
+ **OpenAI:**
+ - Check usage: https://platform.openai.com/usage
+ - Add credits or wait for reset
+ - Use GPT-3.5 instead of GPT-4 (cheaper)
+
+ **Anthropic:**
+ - Check console: https://console.anthropic.com
+ - Add credits or upgrade plan
+
+ ---
+
+ ### 📱 Mobile/Responsive Issues
+
+ **Symptom:** UI doesn't work well on mobile
+
+ **Cause:** Gradio has some mobile limitations
+
+ **Solutions:**
+ - Use landscape orientation
+ - Use a tablet or desktop for the best experience
+ - Some features may be limited on mobile
+
+ ---
+
+ ## Debugging Steps
+
+ ### 1. Check Environment
+ ```bash
+ python check_env.py
+ ```
+
+ Look for:
+ - ✅ At least one API key is SET
+ - ✅ All dependencies installed
+ - ✅ Provider will be detected
+
+ ### 2. Check Logs
+
+ **On HuggingFace Spaces:**
+ - Go to the "Logs" tab
+ - Look for "=== LLM Backend Initialization ==="
+ - Check which credentials are found
+
+ **Running locally:**
+ - Check terminal output
+ - Look for initialization messages
+ - Check for errors/warnings
+
+ ### 3. Test Backend
+ ```bash
+ python test_hf_backend.py
+ ```
+
+ Should show:
+ - ✅ Backend initialized
+ - ✅ Generation successful
+ - ✅ ALL TESTS PASSED
+
+ ### 4. Start Simple
+
+ If issues persist:
+ 1. **Test with OpenAI first** (most reliable)
+ 2. **Use example data** from the UI
+ 3. **Try one feature at a time** (generation only)
+ 4. **Check network connectivity**
+
+ ---
+
+ ## Getting Help
+
+ ### Before Asking for Help
+
+ 1. ✅ Run `python check_env.py`
+ 2. ✅ Check logs for error messages
+ 3. ✅ Try with example data
+ 4. ✅ Update to the latest code
+ 5. ✅ Read this troubleshooting guide
+
+ ### Where to Get Help
+
+ 1. **Check documentation:**
+    - `README.md` - Overview
+    - `USAGE_GUIDE.md` - How to use
+    - `DEPLOYMENT.md` - Setup instructions
+    - `QUICK_START_HF_SPACES.md` - Fast deployment
+
+ 2. **Common resources:**
+    - Gradio docs: https://gradio.app/docs
+    - HuggingFace docs: https://huggingface.co/docs
+    - OpenAI docs: https://platform.openai.com/docs
+
+ 3. **Report issues:**
+    - Include output from `check_env.py`
+    - Include relevant logs
+    - Describe what you tried
+
+ ---
+
+ ## Performance Tips
+
+ ### For Best Results
+
+ **Model Selection:**
+ - **Best Quality:** OpenAI GPT-4 or Anthropic Claude 3.5 Sonnet
+ - **Best Value:** OpenAI GPT-4o-mini
+ - **Free:** HuggingFace (slower, lower quality)
+
+ **Request Optimization:**
+ - Keep outlines concise (2-4 sentences)
+ - Request 10-15 questions (not 25)
+ - Translate common languages first
+ - Provide 20+ responses for analysis
+
+ **Cost Control:**
+ - Use GPT-4o-mini instead of GPT-4 ($0.15 vs $5 per 1M tokens)
+ - Cache common surveys
+ - Batch operations when possible
+ - Monitor usage dashboards
+
+ ---
+
+ ## Emergency Fixes
+
+ ### App Won't Start At All
+
+ ```bash
+ # 1. Clean install
+ rm -rf venv
+ python -m venv venv
+ source venv/bin/activate  # or venv\Scripts\activate on Windows
+ pip install -r requirements.txt
+
+ # 2. Check environment
+ python check_env.py
+
+ # 3. Try with minimal config
+ export OPENAI_API_KEY="sk-your-key"
+ python app.py
+ ```
+
+ ### App Starts But Nothing Works
+
+ ```bash
+ # 1. Verify backend
+ python test_hf_backend.py
+
+ # 2. Check imports
+ python -c "from llm_backend import LLMBackend; print('OK')"
+ python -c "from survey_generator import SurveyGenerator; print('OK')"
+
+ # 3. Test manually
+ python
+ >>> from llm_backend import LLMBackend, LLMProvider
+ >>> backend = LLMBackend(provider=LLMProvider.OPENAI)
+ >>> backend.generate([{"role": "user", "content": "Hello"}], max_tokens=10)
+ ```
+
+ ---
+
+ **Still stuck?** Make sure you're using the latest version of all files and have followed the setup instructions carefully.
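The cost-control figures above are easy to sanity-check with a few lines of arithmetic. In this sketch the per-survey token counts are illustrative assumptions, not measurements:

```python
def survey_cost_usd(input_tokens: int, output_tokens: int,
                    in_price_per_m: float, out_price_per_m: float) -> float:
    """Prices are quoted per 1M tokens, so scale each direction and sum."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# Hypothetical survey: 1,500 prompt tokens in, 2,500 completion tokens out,
# at GPT-4o-mini-style pricing ($0.15 in / $0.60 out per 1M tokens)
print(f"${survey_cost_usd(1_500, 2_500, 0.15, 0.60):.4f}")
```

At these rates the raw token cost is a fraction of a cent, so per-survey estimates like $0.01-0.05 leave generous headroom for retries and longer prompts.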
app.py CHANGED
@@ -23,26 +23,39 @@ current_responses = []
def initialize_backend():
    """Initialize LLM backend based on environment"""
    try:
+         # Debug: Print all environment variables related to LLM
+         print("=== LLM Backend Initialization ===")
+         print(f"HF_TOKEN: {'SET' if os.getenv('HF_TOKEN') else 'NOT SET'}")
+         print(f"HUGGINGFACE_API_KEY: {'SET' if os.getenv('HUGGINGFACE_API_KEY') else 'NOT SET'}")
+         print(f"OPENAI_API_KEY: {'SET' if os.getenv('OPENAI_API_KEY') else 'NOT SET'}")
+         print(f"ANTHROPIC_API_KEY: {'SET' if os.getenv('ANTHROPIC_API_KEY') else 'NOT SET'}")
+         print(f"LLM_PROVIDER: {os.getenv('LLM_PROVIDER', 'NOT SET')}")
+
        # Check for explicit provider setting
        provider_env = os.getenv("LLM_PROVIDER", "").lower()

        # Priority 1: Explicitly set provider
        if provider_env == "openai" and os.getenv("OPENAI_API_KEY"):
+             print("Using OpenAI (explicit)")
            return LLMBackend(provider=LLMProvider.OPENAI)
        elif provider_env == "anthropic" and os.getenv("ANTHROPIC_API_KEY"):
+             print("Using Anthropic (explicit)")
            return LLMBackend(provider=LLMProvider.ANTHROPIC)
        elif provider_env == "huggingface" and (os.getenv("HUGGINGFACE_API_KEY") or os.getenv("HF_TOKEN")):
            api_key = os.getenv("HUGGINGFACE_API_KEY") or os.getenv("HF_TOKEN")
+             print("Using HuggingFace (explicit)")
            return LLMBackend(provider=LLMProvider.HUGGINGFACE, api_key=api_key)
        elif provider_env == "lm_studio":
+             print("Using LM Studio (explicit)")
            return LLMBackend(provider=LLMProvider.LM_STUDIO)

        # Priority 2: Auto-detect based on available credentials
        # HF_TOKEN is automatically available in HF Spaces, so check it first
-         if os.getenv("HF_TOKEN") or os.getenv("HUGGINGFACE_API_KEY"):
-             api_key = os.getenv("HUGGINGFACE_API_KEY") or os.getenv("HF_TOKEN")
+         hf_token = os.getenv("HF_TOKEN") or os.getenv("HUGGINGFACE_API_KEY")
+         if hf_token:
            print(f"Auto-detected HuggingFace credentials, using HF Inference API")
+             print(f"Token preview: {hf_token[:10]}...")
+             return LLMBackend(provider=LLMProvider.HUGGINGFACE, api_key=hf_token)
-             return LLMBackend(provider=LLMProvider.HUGGINGFACE, api_key=api_key)
        elif os.getenv("OPENAI_API_KEY"):
            print(f"Auto-detected OpenAI credentials")
            return LLMBackend(provider=LLMProvider.OPENAI)
@@ -51,8 +64,19 @@ def initialize_backend():
            return LLMBackend(provider=LLMProvider.ANTHROPIC)
        else:
            # No credentials found - return None to show error in UI
+             print("="*60)
            print("WARNING: No LLM provider credentials found!")
-             print("Please set one of: OPENAI_API_KEY, ANTHROPIC_API_KEY, HUGGINGFACE_API_KEY, or HF_TOKEN")
+             print("="*60)
+             print("For HuggingFace Spaces:")
+             print("  - HF_TOKEN should be automatically available")
+             print("  - Make sure your Space is PUBLIC")
+             print("  - Or add HUGGINGFACE_API_KEY in Settings")
+             print("")
+             print("For other providers, set one of:")
+             print("  - OPENAI_API_KEY")
+             print("  - ANTHROPIC_API_KEY")
+             print("  - HUGGINGFACE_API_KEY")
+             print("="*60)
            return None

    except Exception as e:
@@ -226,7 +250,9 @@ def translate_current_survey(target_languages: List[str]):

def get_language_choices():
    """Get language choices for dropdown"""
-     langs = survey_trans.get_supported_languages()
+     # Get languages directly from SurveyTranslator class (static list)
+     from survey_translator import SurveyTranslator
+     langs = SurveyTranslator.SUPPORTED_LANGUAGES
    return [f"{code} - {name}" for code, name in langs.items()]


@@ -332,12 +358,20 @@ def create_interface():
        # Show backend status
        if llm_backend:
            status_msg = f"✅ **Active LLM Provider:** {llm_backend.provider.value.upper()} | Model: {llm_backend.model}"
-             status_color = "green"
+             bg_color = "rgba(0, 255, 0, 0.1)"
        else:
-             status_msg = "⚠️ **No LLM Provider Configured** - Please set API credentials (see About tab for instructions)"
-             status_color = "orange"
+             status_msg = """⚠️ **LLM Provider Not Configured**
+
+ **To use this app, you need to configure an LLM provider:**
+
+ 1. **Easiest (HuggingFace Spaces):** Make sure your Space is PUBLIC and HF_TOKEN will be auto-available
+ 2. **Best Quality:** Add `OPENAI_API_KEY` in Space Settings → Variables
+ 3. **Alternative:** Add `ANTHROPIC_API_KEY` or `HUGGINGFACE_API_KEY`
+
+ See the **About** tab for detailed instructions."""
+             bg_color = "rgba(255, 165, 0, 0.2)"

-         gr.Markdown(f'<div style="background-color: rgba(255, 165, 0, 0.1); padding: 10px; border-radius: 5px; margin: 10px 0;">{status_msg}</div>')
+         gr.Markdown(f'<div style="background-color: {bg_color}; padding: 15px; border-radius: 5px; margin: 10px 0; border-left: 4px solid #FF6B6B;">{status_msg}</div>')

        with gr.Tabs() as tabs:
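The detection priority that `initialize_backend` implements above can be restated as a pure function over an environment mapping, which makes the ordering easy to unit-test without real credentials. A sketch; the function name and return strings are hypothetical, not part of the app:

```python
from typing import Optional

def detect_provider(env: dict) -> Optional[str]:
    """Mirror of the priority above: explicit LLM_PROVIDER first,
    then HF token, then OpenAI, then Anthropic."""
    explicit = env.get("LLM_PROVIDER", "").lower()
    if explicit == "openai" and env.get("OPENAI_API_KEY"):
        return "openai"
    if explicit == "anthropic" and env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if explicit == "huggingface" and (env.get("HUGGINGFACE_API_KEY") or env.get("HF_TOKEN")):
        return "huggingface"
    if explicit == "lm_studio":
        return "lm_studio"
    # Auto-detect: the HF token is checked first because Spaces inject it
    if env.get("HF_TOKEN") or env.get("HUGGINGFACE_API_KEY"):
        return "huggingface"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    return None

# The HF token wins over an OpenAI key when no explicit provider is set
print(detect_provider({"HF_TOKEN": "t", "OPENAI_API_KEY": "k"}))  # huggingface
```

Note the consequence: on Spaces, an `OPENAI_API_KEY` alone is not enough to switch providers; you must also set `LLM_PROVIDER=openai`, exactly as the docs above instruct.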
check_env.py ADDED
@@ -0,0 +1,147 @@
+ #!/usr/bin/env python3
+ """
+ Environment Check Script for ConversAI
+ Run this to diagnose configuration issues
+ """
+ import os
+ import sys
+
+
+ def check_environment():
+     """Check environment variables and configuration"""
+     print("="*70)
+     print("ConversAI Environment Check")
+     print("="*70)
+     print()
+
+     # Check Python version
+     print(f"Python Version: {sys.version}")
+     print()
+
+     # Check environment variables
+     print("Environment Variables:")
+     print("-" * 70)
+
+     env_vars = {
+         "HF_TOKEN": "HuggingFace Token (auto in PUBLIC Spaces)",
+         "HUGGINGFACE_API_KEY": "HuggingFace API Key (manual)",
+         "OPENAI_API_KEY": "OpenAI API Key",
+         "ANTHROPIC_API_KEY": "Anthropic API Key",
+         "LLM_PROVIDER": "Explicit LLM Provider",
+         "LLM_MODEL": "Custom Model Name",
+         "SPACE_ID": "HuggingFace Space ID (if in Space)"
+     }
+
+     # Only these count as credentials; LLM_PROVIDER etc. are configuration
+     credential_vars = {"HF_TOKEN", "HUGGINGFACE_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"}
+
+     has_any_key = False
+     for var, description in env_vars.items():
+         value = os.getenv(var)
+         if value:
+             if var in credential_vars:
+                 has_any_key = True
+             # Show first 10 chars for security
+             preview = value[:10] + "..." if len(value) > 10 else value
+             print(f"✅ {var:<25} SET ({preview})")
+         else:
+             print(f"❌ {var:<25} NOT SET")
+
+     print()
+     print("-" * 70)
+     print()
+
+     # Determine what will happen
+     print("Configuration Status:")
+     print("-" * 70)
+
+     if not has_any_key:
+         print("❌ NO CREDENTIALS FOUND")
+         print()
+         print("The app will NOT work without LLM credentials.")
+         print()
+         print("Solutions:")
+         print("1. If on HuggingFace Spaces:")
+         print("   → Make sure Space is PUBLIC (Settings → Visibility)")
+         print("   → Or add HUGGINGFACE_API_KEY in Settings → Variables")
+         print()
+         print("2. If running locally:")
+         print("   → Set OPENAI_API_KEY environment variable")
+         print("   → Or set ANTHROPIC_API_KEY")
+         print("   → Or set HUGGINGFACE_API_KEY")
+         print()
+         return False
+
+     # Check which provider will be used
+     provider = os.getenv("LLM_PROVIDER", "").lower()
+
+     if provider == "openai" and os.getenv("OPENAI_API_KEY"):
+         print("✅ Will use: OpenAI (explicit)")
+     elif provider == "anthropic" and os.getenv("ANTHROPIC_API_KEY"):
+         print("✅ Will use: Anthropic Claude (explicit)")
+     elif provider == "huggingface" and (os.getenv("HUGGINGFACE_API_KEY") or os.getenv("HF_TOKEN")):
+         print("✅ Will use: HuggingFace Inference API (explicit)")
+     elif os.getenv("HF_TOKEN") or os.getenv("HUGGINGFACE_API_KEY"):
+         print("✅ Will use: HuggingFace Inference API (auto-detected)")
+     elif os.getenv("OPENAI_API_KEY"):
+         print("✅ Will use: OpenAI (auto-detected)")
+     elif os.getenv("ANTHROPIC_API_KEY"):
+         print("✅ Will use: Anthropic Claude (auto-detected)")
+     else:
+         print("⚠️ Unknown configuration")
+
+     print()
+
+     # Check if in HuggingFace Spaces
+     if os.getenv("SPACE_ID"):
+         print(f"📍 Running in HuggingFace Space: {os.getenv('SPACE_ID')}")
+         if not (os.getenv("HF_TOKEN") or os.getenv("HUGGINGFACE_API_KEY")):
+             print("⚠️ WARNING: In HF Space but no HF token found!")
+             print("   This Space might be PRIVATE. Consider:")
+             print("   1. Making it PUBLIC, or")
+             print("   2. Adding HUGGINGFACE_API_KEY manually")
+     else:
+         print("📍 Running locally (not in HuggingFace Space)")
+
+     print()
+     print("-" * 70)
+     print()
+
+     # Check imports
+     print("Checking Python Dependencies:")
+     print("-" * 70)
+
+     dependencies = {
+         "gradio": "Gradio UI framework",
+         "requests": "HTTP library",
+         "pandas": "Data processing (optional)"
+     }
+
+     all_imports_ok = True
+     for module, description in dependencies.items():
+         try:
+             __import__(module)
+             print(f"✅ {module:<20} OK")
+         except ImportError:
+             print(f"❌ {module:<20} NOT INSTALLED")
+             all_imports_ok = False
+
+     print()
+
+     if not all_imports_ok:
+         print("⚠️ Some dependencies missing. Run:")
+         print("   pip install -r requirements.txt")
+         print()
+
+     print("="*70)
+
+     if has_any_key and all_imports_ok:
+         print("✅ Environment looks good! You should be able to run the app.")
+         print()
+         print("Start the app with:")
+         print("   python app.py")
+         return True
+     else:
+         print("❌ Environment has issues. Fix the problems above.")
+         return False
+
+
+ if __name__ == "__main__":
+     success = check_environment()
+     sys.exit(0 if success else 1)
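The `value[:10] + "..."` preview that check_env.py prints is worth isolating as a helper, so full secrets never reach logs anywhere in the codebase. A minimal sketch; the helper name is hypothetical:

```python
def mask_secret(value: str, visible: int = 10) -> str:
    """Show only the first `visible` characters, like check_env.py does."""
    return value[:visible] + "..." if len(value) > visible else value

print(mask_secret("hf_abcdefghijklmnop"))  # hf_abcdefg...
print(mask_secret("short"))                # short
```

Short values pass through unchanged, which is fine for non-secret settings like `LLM_PROVIDER`.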
test_hf_backend.py ADDED
@@ -0,0 +1,109 @@
+ """
+ Quick test to verify HuggingFace Inference API works
+ Run this on HF Spaces to debug LLM connection issues
+ """
+ import os
+ from llm_backend import LLMBackend, LLMProvider
+
+
+ def test_hf_connection():
+     """Test HuggingFace Inference API connection"""
+     print("="*60)
+     print("Testing HuggingFace Inference API Connection")
+     print("="*60)
+
+     # Check for token
+     hf_token = os.getenv("HF_TOKEN") or os.getenv("HUGGINGFACE_API_KEY")
+     if hf_token:
+         print(f"✓ HF Token found: {hf_token[:10]}...")
+     else:
+         print("✗ No HF Token found")
+         print("  Set HF_TOKEN or HUGGINGFACE_API_KEY environment variable")
+         return False
+
+     # Initialize backend
+     try:
+         backend = LLMBackend(provider=LLMProvider.HUGGINGFACE, api_key=hf_token)
+         print("✓ Backend initialized")
+         print(f"  Provider: {backend.provider.value}")
+         print(f"  Model: {backend.model}")
+         print(f"  API URL: {backend.api_url}")
+     except Exception as e:
+         print(f"✗ Backend initialization failed: {e}")
+         return False
+
+     # Test simple generation
+     print("\nTesting simple message generation...")
+     messages = [
+         {"role": "system", "content": "You are a helpful assistant."},
+         {"role": "user", "content": "Say 'Hello, World!' and nothing else."}
+     ]
+
+     try:
+         response = backend.generate(messages, max_tokens=50, temperature=0.3)
+         print("✓ Generation successful!")
+         print(f"  Response: {response[:100]}")
+         return True
+     except Exception as e:
+         print(f"✗ Generation failed: {e}")
+         import traceback
+         traceback.print_exc()
+         return False
+
+
+ def test_survey_generation():
+     """Test actual survey generation"""
+     print("\n" + "="*60)
+     print("Testing Survey Generation")
+     print("="*60)
+
+     from survey_generator import SurveyGenerator
+
+     hf_token = os.getenv("HF_TOKEN") or os.getenv("HUGGINGFACE_API_KEY")
+     if not hf_token:
+         print("✗ No HF Token, skipping")
+         return False
+
+     try:
+         backend = LLMBackend(provider=LLMProvider.HUGGINGFACE, api_key=hf_token)
+         generator = SurveyGenerator(backend)
+
+         print("Generating a small test survey...")
+         survey = generator.generate_survey(
+             outline="Understand user satisfaction with a mobile app",
+             survey_type="qualitative",
+             num_questions=3,
+             target_audience="Mobile app users"
+         )
+
+         print("✓ Survey generated!")
+         print(f"  Title: {survey.get('title', 'N/A')}")
+         print(f"  Questions: {len(survey.get('questions', []))}")
+         return True
+
+     except Exception as e:
+         print(f"✗ Survey generation failed: {e}")
+         import traceback
+         traceback.print_exc()
+         return False
+
+
+ if __name__ == "__main__":
+     # Run tests
+     connection_ok = test_hf_connection()
+
+     if connection_ok:
+         survey_ok = test_survey_generation()
+         if survey_ok:
+             print("\n" + "="*60)
+             print("✅ ALL TESTS PASSED!")
+             print("="*60)
+         else:
+             print("\n" + "="*60)
+             print("⚠️ Basic connection works but survey generation failed")
+             print("   This may be due to model limitations or rate limits")
+             print("="*60)
+     else:
+         print("\n" + "="*60)
+         print("❌ CONNECTION FAILED")
+         print("   Check your HF_TOKEN and network connectivity")
+         print("="*60)