# ConversAI Usage Guide
## Quick Start
### 1. Installation
```bash
# Clone or download the repository
cd ConversAI
# Install dependencies
pip install -r requirements.txt
```
### 2. Configuration
ConversAI supports multiple LLM providers. Choose one and configure:
#### Option A: HuggingFace (Recommended for HF Spaces)
```bash
export HUGGINGFACE_API_KEY="your_hf_token_here"
export LLM_PROVIDER="huggingface"
```
#### Option B: OpenAI
```bash
export OPENAI_API_KEY="your_openai_key_here"
export LLM_PROVIDER="openai"
```
#### Option C: Anthropic
```bash
export ANTHROPIC_API_KEY="your_anthropic_key_here"
export LLM_PROVIDER="anthropic"
```
#### Option D: Local LM Studio
```bash
export LLM_PROVIDER="lm_studio"
export LM_STUDIO_URL="http://localhost:1234/v1/chat/completions"
```
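The four options above all follow the same pattern: the app reads `LLM_PROVIDER` and the matching API key from the environment at startup. A minimal sketch of that pattern, using the variable names from this guide (the validation logic is illustrative, not ConversAI's actual code):

```python
import os

# Environment variable expected for each provider (names from this guide)
REQUIRED_KEYS = {
    "huggingface": "HUGGINGFACE_API_KEY",
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "lm_studio": None,  # local server, no API key needed
}

def resolve_provider(env=os.environ):
    """Return (provider, api_key), or raise if the configuration is incomplete."""
    provider = env.get("LLM_PROVIDER", "huggingface")
    if provider not in REQUIRED_KEYS:
        raise ValueError(f"Unknown LLM_PROVIDER: {provider!r}")
    key_name = REQUIRED_KEYS[provider]
    api_key = env.get(key_name) if key_name else None
    if key_name and not api_key:
        raise ValueError(f"{key_name} must be set for provider {provider!r}")
    return provider, api_key
```

Running a check like this before launching makes "LLM generation failed" errors (see Troubleshooting) much easier to diagnose.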
### 3. Run the Application
```bash
python app.py
```
The app will be available at `http://localhost:7860`.
## Features Guide
### Survey Generation
Generate professional surveys from simple outlines.
**Steps:**
1. Navigate to the "Generate Survey" tab
2. Enter your research outline or topic description
- Example: "I want to understand patient experiences with a new diabetes medication"
3. Select survey type: Qualitative, Quantitative, or Mixed
4. Set number of questions (5-25 recommended)
5. Specify your target audience
6. Click "Generate Survey"
**Best Practices:**
- Be specific about your research goals
- Mention key topics you want to explore
- Include context about your target respondents
- Start with 10-15 questions for most surveys
**Output:**
- Formatted survey preview
- Downloadable JSON file with full survey data
- Questions follow industry best practices
- Includes introduction and closing messages
### Survey Translation
Translate your surveys to reach global audiences.
**Steps:**
1. Generate a survey first (or have one ready)
2. Navigate to the "Translate Survey" tab
3. Select target language(s) from the checkbox list
4. Click "Translate Survey"
**Supported Languages:**
- Spanish, French, German, Portuguese
- Chinese, Japanese, Korean
- Arabic, Hindi, Russian
- And 8+ more languages
**Features:**
- Maintains cultural appropriateness
- Preserves question intent and meaning
- Handles multiple languages in one batch
- Exports all translations in a single file
**Tips:**
- Translate to multiple similar languages to compare phrasing
- Use back-translation to verify accuracy
- Consider cultural context for sensitive topics
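The back-translation tip can be partially automated: translate to the target language, translate back, and compare the round trip against the original. The sketch below accepts any `translate(text, source, target)` callable you supply (ConversAI does not expose such a helper; this is an illustrative pattern, and the similarity score is a rough heuristic, not a substitute for review by a native speaker):

```python
from difflib import SequenceMatcher

def back_translation_score(text, translate, target_lang, source_lang="en"):
    """Round-trip `text` through target_lang and return a 0-1 similarity score.

    `translate` is any callable (text, source, target) -> str supplied by the
    caller, e.g. a thin wrapper around your LLM provider of choice.
    """
    forward = translate(text, source_lang, target_lang)
    back = translate(forward, target_lang, source_lang)
    return SequenceMatcher(None, text.lower(), back.lower()).ratio()
```

Questions scoring well below 1.0 are the ones whose meaning may have drifted and deserve a closer look.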
### Data Analysis
Uncover insights from your survey responses.
**Steps:**
1. Navigate to the "Analyze Data" tab
2. Prepare your responses in JSON format:
```json
[
{"q1": "response 1", "q2": "response 2"},
{"q1": "response 1", "q2": "response 2"}
]
```
3. Optionally include questions for context
4. Click "Load Example" to see the expected format
5. Click "Analyze Data"
**Analysis Includes:**
- **Executive Summary** - High-level overview
- **Themes** - Main topics identified in responses
- **Sentiment Analysis** - Emotional tone and distribution
- **Key Insights** - Actionable findings
- **Statistics** - Response metrics
**Output Formats:**
- Markdown report (for viewing)
- JSON file (for further processing)
- Both include complete analysis results
**Pro Tips:**
- Minimum 10-20 responses for meaningful analysis
- Include diverse perspectives for richer insights
- Provide questions for better context
- Export results for presentations
## File Formats
### Survey JSON Format
```json
{
"title": "Survey Title",
"introduction": "Welcome message",
"questions": [
{
"id": 1,
"question_text": "Your question here?",
"question_type": "open_ended",
"required": true,
"help_text": "Optional clarification"
}
],
"closing": "Thank you message"
}
```
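Because the export is plain JSON in this shape, you can inspect or post-process it with the standard library alone. A small sketch that prints a one-line summary per question (the sample data mirrors the format above):

```python
import json

SAMPLE = """{
  "title": "Survey Title",
  "introduction": "Welcome message",
  "questions": [
    {"id": 1, "question_text": "Your question here?",
     "question_type": "open_ended", "required": true,
     "help_text": "Optional clarification"}
  ],
  "closing": "Thank you message"
}"""

def summarize_survey(raw_json):
    """Return one summary line per question from a survey JSON string."""
    survey = json.loads(raw_json)
    return [
        f"Q{q['id']} [{q['question_type']}]{' *' if q.get('required') else ''} {q['question_text']}"
        for q in survey["questions"]
    ]

for line in summarize_survey(SAMPLE):
    print(line)  # e.g. "Q1 [open_ended] * Your question here?"
```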
### Responses JSON Format
```json
[
{
"q1": "First question response",
"q2": "Second question response",
"q3": "Third question response"
},
{
"q1": "Another respondent's answer",
"q2": "Their second answer",
"q3": "Their third answer"
}
]
```
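Before uploading responses, it is worth validating the structure locally, since the analyzer expects a JSON array of flat objects and at least a handful of responses (see Troubleshooting). A minimal checker using only the standard library:

```python
import json

def validate_responses(raw_json):
    """Return (ok, message) for a responses payload in the format above."""
    try:
        data = json.loads(raw_json)
    except json.JSONDecodeError as exc:
        return False, f"Invalid JSON: {exc}"
    if not isinstance(data, list):
        return False, "Top level must be a JSON array of respondents"
    for i, row in enumerate(data):
        if not isinstance(row, dict):
            return False, f"Respondent {i} is not an object"
    if len(data) < 3:
        return False, "Provide at least 3-5 responses for analysis"
    return True, f"OK: {len(data)} responses"
```

The 3-response floor matches the troubleshooting advice below; raise it to 10-20 if you want the check to enforce the "meaningful analysis" threshold instead.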
## Deployment to HuggingFace Spaces
1. Create a new Space on HuggingFace
2. Upload all `.py` files and `requirements.txt`
3. Upload `README.md` with the frontmatter
4. Set environment variables in Space settings:
- Add `HF_TOKEN` (automatically available)
- Or add API keys for other providers
5. Space will auto-deploy!
## Troubleshooting
### Issue: "LLM generation failed"
**Solutions:**
- Check your API key is set correctly
- Verify you have credits/quota with your provider
- Try a different provider
- Check network connectivity
### Issue: "Translation failed"
**Solutions:**
- Ensure survey was generated first
- Check API key and quota
- Try translating to fewer languages at once
- Verify the survey data is valid
### Issue: "Analysis returned no results"
**Solutions:**
- Check JSON format is valid
- Ensure the responses payload is a JSON array (list) of objects
- Provide at least 3-5 responses
- Check LLM provider is working
### Issue: "Module import errors"
**Solutions:**
```bash
pip install -r requirements.txt --upgrade
```
## API Usage (Advanced)
You can also use the modules programmatically:
```python
from llm_backend import LLMBackend, LLMProvider
from survey_generator import SurveyGenerator
# Initialize
backend = LLMBackend(provider=LLMProvider.OPENAI)
generator = SurveyGenerator(backend)
# Generate survey
survey = generator.generate_survey(
outline="Study user satisfaction with mobile apps",
survey_type="qualitative",
num_questions=10,
target_audience="Mobile app users aged 18-35"
)
print(survey)
```
## Best Practices
### For Survey Generation:
- Start with clear research objectives
- Be specific about your target audience
- Review and refine generated questions
- Test with a small pilot group first
### For Translation:
- Verify translations with native speakers
- Consider regional language variations
- Test cultural appropriateness
- Use back-translation for validation
### For Analysis:
- Collect sufficient responses (20+ ideal)
- Ensure response quality
- Combine with quantitative data when possible
- Review AI insights critically
## Support
For issues, questions, or contributions:
- Check the README.md
- Review this usage guide
- Open an issue on GitHub
- Contact the development team
## Tips for Production Use
1. **Data Privacy**: Review your LLM provider's data policy
2. **API Costs**: Monitor usage to control costs
3. **Rate Limits**: Be aware of provider rate limits
4. **Validation**: Always review AI-generated content
5. **Backup**: Save generated surveys and analyses
6. **Version Control**: Track survey versions
7. **Ethics**: Ensure informed consent from participants
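Tips 5 and 6 (Backup and Version Control) can be as simple as writing each generated survey to a timestamped file. The filename scheme below is one possible convention, not something ConversAI does for you:

```python
import json
import re
from datetime import datetime, timezone
from pathlib import Path

def save_survey(survey, out_dir="survey_archive"):
    """Write a survey dict to <out_dir>/<title-slug>-<UTC timestamp>.json."""
    slug = re.sub(r"[^a-z0-9]+", "-", survey.get("title", "survey").lower()).strip("-")
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = Path(out_dir) / f"{slug}-{stamp}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(survey, indent=2, ensure_ascii=False))
    return path
```

Committing the archive directory to git then gives you a full version history of every survey you generate.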
---
Happy researching!