# ConversAI Usage Guide

## Quick Start

### 1. Installation

```bash
# Clone or download the repository
cd ConversAI

# Install dependencies
pip install -r requirements.txt
```
### 2. Configuration

ConversAI supports multiple LLM providers. Choose one and configure it:

#### Option A: HuggingFace (Recommended for HF Spaces)

```bash
export HUGGINGFACE_API_KEY="your_hf_token_here"
export LLM_PROVIDER="huggingface"
```

#### Option B: OpenAI

```bash
export OPENAI_API_KEY="your_openai_key_here"
export LLM_PROVIDER="openai"
```

#### Option C: Anthropic

```bash
export ANTHROPIC_API_KEY="your_anthropic_key_here"
export LLM_PROVIDER="anthropic"
```

#### Option D: Local LM Studio

```bash
export LLM_PROVIDER="lm_studio"
export LM_STUDIO_URL="http://localhost:1234/v1/chat/completions"
```
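The `LLM_PROVIDER` variable drives provider selection. As a minimal sketch of how that lookup might work (the `resolve_provider` helper below is illustrative, not part of ConversAI's actual `llm_backend` module):

```python
import os

# Illustrative sketch of env-based provider selection; the real
# llm_backend module may resolve providers differently.
SUPPORTED_PROVIDERS = {"huggingface", "openai", "anthropic", "lm_studio"}

def resolve_provider(default: str = "huggingface") -> str:
    """Return the configured provider name, falling back to a default."""
    provider = os.environ.get("LLM_PROVIDER", default).lower()
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"Unknown LLM_PROVIDER: {provider!r}")
    return provider

os.environ["LLM_PROVIDER"] = "openai"
print(resolve_provider())  # openai
```

Failing fast on an unknown provider name surfaces configuration typos at startup rather than at the first generation request.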
### 3. Run the Application

```bash
python app.py
```

The app will be available at `http://localhost:7860`.
## Features Guide

### Survey Generation

Generate professional surveys from simple outlines.

**Steps:**

1. Navigate to the "Generate Survey" tab
2. Enter your research outline or topic description
   - Example: "I want to understand patient experiences with a new diabetes medication"
3. Select survey type: Qualitative, Quantitative, or Mixed
4. Set the number of questions (5-25 recommended)
5. Specify your target audience
6. Click "Generate Survey"

**Best Practices:**

- Be specific about your research goals
- Mention key topics you want to explore
- Include context about your target respondents
- Start with 10-15 questions for most surveys

**Output:**

- Formatted survey preview
- Downloadable JSON file with full survey data
- Questions follow industry best practices
- Includes introduction and closing messages
### Survey Translation

Translate your surveys to reach global audiences.

**Steps:**

1. Generate a survey first (or have one ready)
2. Navigate to the "Translate Survey" tab
3. Select target language(s) from the checkbox list
4. Click "Translate Survey"

**Supported Languages:**

- Spanish, French, German, Portuguese
- Chinese, Japanese, Korean
- Arabic, Hindi, Russian
- And 8+ more languages

**Features:**

- Maintains cultural appropriateness
- Preserves question intent and meaning
- Handles multiple languages in one batch
- Exports all translations in a single file

**Tips:**

- Translate to multiple similar languages to compare phrasing
- Use back-translation to verify accuracy
- Consider cultural context for sensitive topics
### Data Analysis

Uncover insights from your survey responses.

**Steps:**

1. Navigate to the "Analyze Data" tab
2. Prepare your responses in JSON format:

   ```json
   [
     {"q1": "response 1", "q2": "response 2"},
     {"q1": "response 1", "q2": "response 2"}
   ]
   ```

3. Optionally include the questions for context
4. Click "Load Example" to see the expected format
5. Click "Analyze Data"
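Before pasting responses into the tab, it can help to sanity-check the JSON locally. This validator is a standalone convenience, not part of ConversAI:

```python
import json

# Quick local sanity check for the responses payload before pasting it
# into the "Analyze Data" tab: must be valid JSON, and specifically a
# JSON array of objects.
def validate_responses(raw: str) -> list:
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
    if not isinstance(data, list):
        raise TypeError("Responses must be a JSON array of objects")
    for i, row in enumerate(data):
        if not isinstance(row, dict):
            raise TypeError(f"Entry {i} is not an object")
    return data

payload = '[{"q1": "response 1", "q2": "response 2"}]'
print(len(validate_responses(payload)))  # 1
```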
**Analysis Includes:**

- **Executive Summary** - High-level overview
- **Themes** - Main topics identified in responses
- **Sentiment Analysis** - Emotional tone and distribution
- **Key Insights** - Actionable findings
- **Statistics** - Response metrics

**Output Formats:**

- Markdown report (for viewing)
- JSON file (for further processing)
- Both include complete analysis results

**Pro Tips:**

- Minimum 10-20 responses for meaningful analysis
- Include diverse perspectives for richer insights
- Provide questions for better context
- Export results for presentations
## File Formats

### Survey JSON Format

```json
{
  "title": "Survey Title",
  "introduction": "Welcome message",
  "questions": [
    {
      "id": 1,
      "question_text": "Your question here?",
      "question_type": "open_ended",
      "required": true,
      "help_text": "Optional clarification"
    }
  ],
  "closing": "Thank you message"
}
```

### Responses JSON Format

```json
[
  {
    "q1": "First question response",
    "q2": "Second question response",
    "q3": "Third question response"
  },
  {
    "q1": "Another respondent's answer",
    "q2": "Their second answer",
    "q3": "Their third answer"
  }
]
```
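Many survey tools export responses as CSV rather than JSON. A CSV export can be converted to the responses format above with a few lines of standard-library Python (the `csv_to_responses` helper is illustrative, not a ConversAI function):

```python
import csv
import io
import json

# Illustrative converter from a CSV export to the responses JSON format;
# the CSV column headers (q1, q2, ...) become the response keys.
def csv_to_responses(csv_text: str) -> str:
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

csv_text = "q1,q2\nFirst answer,Second answer\nAnother answer,Their second answer\n"
print(csv_to_responses(csv_text))
```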
## Deployment to HuggingFace Spaces

1. Create a new Space on HuggingFace
2. Upload all `.py` files and `requirements.txt`
3. Upload `README.md` with the frontmatter
4. Set environment variables in the Space settings:
   - Add `HF_TOKEN` (automatically available)
   - Or add API keys for other providers
5. The Space will auto-deploy!
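For step 3, HuggingFace Spaces reads its configuration from the README's YAML frontmatter. A typical frontmatter for a Gradio app looks like the following (the title, emoji, and colors here are illustrative placeholders; adjust them for your Space):

```yaml
---
title: ConversAI
emoji: 💬
colorFrom: blue
colorTo: purple
sdk: gradio
app_file: app.py
pinned: false
---
```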
## Troubleshooting

### Issue: "LLM generation failed"

**Solutions:**

- Check that your API key is set correctly
- Verify that you have credits/quota with your provider
- Try a different provider
- Check network connectivity

### Issue: "Translation failed"

**Solutions:**

- Ensure a survey was generated first
- Check your API key and quota
- Try translating to fewer languages at once
- Verify that the survey data is valid

### Issue: "Analysis returned no results"

**Solutions:**

- Check that the JSON is valid
- Ensure the responses payload is a JSON array
- Provide at least 3-5 responses
- Check that your LLM provider is working

### Issue: "Module import errors"

**Solution:**

```bash
pip install -r requirements.txt --upgrade
```
## API Usage (Advanced)

You can also use the modules programmatically:

```python
from llm_backend import LLMBackend, LLMProvider
from survey_generator import SurveyGenerator

# Initialize the backend and generator
backend = LLMBackend(provider=LLMProvider.OPENAI)
generator = SurveyGenerator(backend)

# Generate a survey
survey = generator.generate_survey(
    outline="Study user satisfaction with mobile apps",
    survey_type="qualitative",
    num_questions=10,
    target_audience="Mobile app users aged 18-35"
)
print(survey)
```
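If `generate_survey` returns a JSON-serializable dict (check the return type in your version), the result can be persisted for later translation or analysis. This saving helper is an illustrative sketch, not part of ConversAI:

```python
import json
from pathlib import Path

# Illustrative helper: persist a generated survey dict as pretty-printed
# JSON so it can be re-loaded for translation or analysis later.
def save_survey(survey: dict, path: str = "survey.json") -> Path:
    out = Path(path)
    out.write_text(json.dumps(survey, indent=2, ensure_ascii=False))
    return out

saved = save_survey({"title": "Mobile App Satisfaction", "questions": []})
print(saved.name)  # survey.json
```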
## Best Practices

### For Survey Generation:

- Start with clear research objectives
- Be specific about your target audience
- Review and refine generated questions
- Test with a small pilot group first

### For Translation:

- Verify translations with native speakers
- Consider regional language variations
- Test cultural appropriateness
- Use back-translation for validation

### For Analysis:

- Collect sufficient responses (20+ ideal)
- Ensure response quality
- Combine with quantitative data when possible
- Review AI insights critically

## Support

For issues, questions, or contributions:

- Check the README.md
- Review this usage guide
- Open an issue on GitHub
- Contact the development team
## Tips for Production Use

1. **Data Privacy**: Review your LLM provider's data policy
2. **API Costs**: Monitor usage to control costs
3. **Rate Limits**: Be aware of provider rate limits
4. **Validation**: Always review AI-generated content
5. **Backup**: Save generated surveys and analyses
6. **Version Control**: Track survey versions
7. **Ethics**: Ensure informed consent from participants

---

Happy researching! 🔬