---
title: ConversAI - Qualitative Research Assistant
emoji: 💬
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.45.0
app_file: app.py
pinned: false
license: mit
---
# ConversAI - AI-Powered Qualitative Research Assistant
Battle the blank page, reach global audiences, and uncover insights with AI assistance.
---
> **✨ NEW (Nov 2025):** Now uses **Microsoft Phi-3** - faster, more reliable, and **completely FREE** on HuggingFace!
---
## 🌟 Features
### 📝 Survey Generation
- Generate professional surveys from simple outlines
- Follow industry best practices automatically
- Choose from qualitative, quantitative, or mixed methods
- Customize number of questions and target audience
### 🌍 Survey Translation
- Translate surveys to 18+ languages
- Maintain cultural appropriateness and meaning
- Reach global audiences effortlessly
- Batch translation support
### 📊 Data Analysis
- AI-assisted thematic analysis
- Sentiment analysis and emotional insights
- Automatic pattern and trend detection
- Generate actionable insights and recommendations
- Export detailed analysis reports
## 🚀 Quick Start
**On HuggingFace Spaces:** Works immediately with zero configuration! Uses the free HF Inference API.
**Workflow:**
1. **Generate a Survey**: Start with an outline or topic description
2. **Translate**: Select target languages to reach global audiences
3. **Collect Responses**: Use the generated survey with your participants
4. **Analyze**: Upload responses to uncover key findings and trends
## 🔧 Configuration
### Default: HuggingFace Free Tier (Completely FREE!)
**✨ Zero configuration needed!** ConversAI works out-of-the-box on HuggingFace Spaces.
**Default Model:** Microsoft Phi-3-mini-4k-instruct
- ✅ **100% Free** - No API keys, no costs, ever
- ✅ **Fast** - Optimized for speed (10-30 seconds)
- ✅ **Ungated** - No approval needed, works immediately
- ✅ **Good Quality** - Suitable for professional survey work
- ✅ **Reliable** - Stable on the HuggingFace Inference API
**Setup for PUBLIC Spaces (Recommended):**
- Just deploy - uses built-in `HF_TOKEN` automatically
- **No configuration required at all!**
**Setup for PRIVATE Spaces:**
1. Go to https://huggingface.co/settings/tokens
2. Copy your token (read permission is enough)
3. Add in Space Settings → Variables:
- Name: `HUGGINGFACE_API_KEY`
- Value: your_token_here
4. Restart Space
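For reference, the fallback between the two variables can be sketched in a few lines. `resolve_hf_token` is a hypothetical helper for illustration; it is not necessarily how `llm_backend.py` reads the token:

```python
import os

def resolve_hf_token():
    """Return the API token, preferring the user-set variable.

    HUGGINGFACE_API_KEY is the variable added in Space Settings above;
    HF_TOKEN is the token HuggingFace provides to Spaces automatically.
    """
    return os.environ.get("HUGGINGFACE_API_KEY") or os.environ.get("HF_TOKEN")
```

With only the built-in `HF_TOKEN` set, the helper returns it; once `HUGGINGFACE_API_KEY` is added, that value takes precedence.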
### Alternative Free Models
You can try different free models by setting the `LLM_MODEL` environment variable:
**Recommended Free Models:**
| Model | Best For | Speed | Quality | Ungated |
|-------|----------|-------|---------|---------|
| **microsoft/Phi-3-mini-4k-instruct** (default) | General use, balanced | ⚡⚡ Fast | ⭐⭐⭐ Good | ✅ Yes |
| **google/flan-t5-xxl** | Fast responses, instructions | ⚡⚡⚡ Very Fast | ⭐⭐ Decent | ✅ Yes |
| **mistralai/Mistral-7B-Instruct-v0.2** | Best quality (slower) | ⚡ Slower | ⭐⭐⭐⭐ Excellent | ✅ Yes |
| **google/flan-t5-xl** | Maximum speed | ⚡⚡⚡ Very Fast | ⭐⭐ Decent | ✅ Yes |
| **google/flan-ul2** | Long contexts | ⚡⚡ Fast | ⭐⭐⭐ Good | ✅ Yes |
**To change model:**
```bash
# In Space Settings → Variables
LLM_MODEL=mistralai/Mistral-7B-Instruct-v0.2
```
**Or in code:**
```python
import os
os.environ["LLM_MODEL"] = "google/flan-t5-xxl"
```
### Tips for Best Performance with Free Models
1. **Keep prompts concise** - Shorter outlines = faster generation
2. **Request fewer questions** - Start with 5-10 instead of 20+
3. **Translate one language at a time** - Better reliability on free tier
4. **Be patient on first request** - Models need to "warm up" (30-60 sec)
5. **Use during off-peak hours** - Less queue time, faster responses
6. **Try different models** - Some work better for specific tasks
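Tip 4 can also be handled in code: free-tier endpoints sometimes error while the model is still loading, so a retry with exponential backoff smooths this over. This is a generic sketch; the helper, its parameters, and the `RuntimeError` stand-in are illustrative, not part of ConversAI's code:

```python
import time

def call_with_retry(fn, retries=4, base_delay=2.0):
    """Call fn(), retrying with exponential backoff if it raises.

    Useful for free-tier inference endpoints that fail while the
    model is still warming up.
    """
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:                       # e.g. a "model loading" error
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 2s, 4s, 8s, ...

# Stub that fails twice before succeeding, mimicking a cold model:
attempts = {"n": 0}
def flaky_generate():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("model is loading")
    return "generated survey text"

print(call_with_retry(flaky_generate, base_delay=0.1))  # prints: generated survey text
```

Adjust `retries` and `base_delay` so the total wait covers the 30-60 second warm-up window mentioned above.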
## 📦 Installation
```bash
# Install dependencies
pip install -r requirements.txt
# Check environment setup (optional but recommended)
python check_env.py
# Run the app
python app.py
```
## 🏗️ Architecture
ConversAI is built with a modular architecture:
- **llm_backend.py** - Unified LLM interface supporting multiple providers
- **survey_generator.py** - AI-powered survey generation
- **survey_translator.py** - Multi-language translation engine
- **data_analyzer.py** - Qualitative data analysis and insights
- **app.py** - Gradio-based web interface
- **export_utils.py** - Export to JSON, CSV, Markdown
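The "unified LLM interface" idea can be sketched as follows. This is a minimal illustration of the pattern, assuming hypothetical names (`LLMBackend`, `EchoBackend`, `make_survey_intro`); it is not the actual `llm_backend.py` API:

```python
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Provider-agnostic LLM interface: each provider implements generate()."""

    @abstractmethod
    def generate(self, prompt: str, max_tokens: int = 512) -> str: ...

class EchoBackend(LLMBackend):
    """Toy backend for local testing; a real implementation would call
    the HuggingFace Inference API."""

    def generate(self, prompt: str, max_tokens: int = 512) -> str:
        return f"[echo] {prompt[:max_tokens]}"

# Downstream modules (survey_generator, data_analyzer, ...) can then
# depend only on the interface, so providers are swappable:
def make_survey_intro(backend: LLMBackend, topic: str) -> str:
    return backend.generate(f"Write a survey introduction about {topic}.")
```

Keeping generation, translation, and analysis behind one interface is what lets the `LLM_MODEL` variable swap models without touching the feature modules.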
## 🔒 Data Privacy
- All processing is done through your configured LLM provider
- No data is stored permanently by this application
- Survey data and responses remain in your control
- Suitable for sensitive research projects
## 🤝 Contributing
Contributions are welcome! This is a production-grade application designed for real-world qualitative research.
## 📄 License
MIT License - Feel free to use for research and commercial purposes.
---
## 📚 Documentation
**New to ConversAI?** Start with **[USER_GUIDE.md](USER_GUIDE.md)** for a complete walkthrough.
**Full Documentation Index:** See **[DOCUMENTATION_INDEX.md](DOCUMENTATION_INDEX.md)** for all available guides.
**Quick Links:**
- 📖 [Complete User Guide](USER_GUIDE.md) - How to use ConversAI (START HERE)
- ⚡ [Quick Start for HF Spaces](QUICK_START_HF_SPACES.md) - 5-minute deployment
- 🔧 [Troubleshooting](TROUBLESHOOTING.md) - Common issues and solutions
- 🚀 [Deployment Guide](DEPLOYMENT.md) - Detailed deployment instructions
- 📘 [Usage Guide](USAGE_GUIDE.md) - Technical usage documentation
- 🆓 [Free Models Guide](FREE_MODELS.md) - Best free models to use
**Diagnostic Tools:**
- Run `python check_env.py` - Check your environment setup
- Run `python test_hf_backend.py` - Test HuggingFace connection
---
Built with ❤️ using Gradio and state-of-the-art open-source LLMs