---
title: Project Echo - Qualitative/Quantitative Research Assistant
emoji: 💬
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
license: mit
---
# ConversAI - AI-Powered Qualitative Research Assistant
Battle the blank page, reach global audiences, and uncover insights with AI assistance.
---
> **✨ UPDATED (Nov 2025):** Now uses **local transformers** with **Microsoft Phi-2** - Fast, contextual, and **completely FREE**! No API dependencies, runs directly on HuggingFace Spaces. Generates actual topic-specific questions (not generic templates).
---
## 🌟 Features
### 📝 Survey Generation
- Generate professional surveys from simple outlines
- Follow industry best practices automatically
- Choose from qualitative, quantitative, or mixed methods
- Customize number of questions and target audience
### 🌍 Survey Translation
- Translate surveys to 18+ languages
- Maintain cultural appropriateness and meaning
- Reach global audiences effortlessly
- Batch translation support
### 📊 Data Analysis
- AI-assisted thematic analysis
- Sentiment analysis and emotional insights
- Automatic pattern and trend detection
- Generate actionable insights and recommendations
- Export detailed analysis reports
### 💬 Conversational Research
- Design custom conversation flows with scripted questions
- AI-moderated interviews with dynamic follow-up questions
- Real-time adaptation based on respondent answers
- Intelligent probing for deeper insights
- Automatic conversation summarization and export
- Export conversations as transcripts, JSON, or CSV
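The transcript, JSON, and CSV exports listed above can be produced with the standard library alone. This is a hypothetical sketch (the helper names and the list-of-turns transcript shape are assumptions, not the actual `export_utils.py` API):

```python
import csv
import io
import json

# Hypothetical sketch of JSON/CSV export. A transcript is assumed to be a
# list of {"speaker": ..., "text": ...} turns; the real export_utils.py
# in this repo may structure things differently.

def export_json(transcript):
    """Serialize a transcript as pretty-printed JSON."""
    return json.dumps(transcript, indent=2, ensure_ascii=False)

def export_csv(transcript):
    """Serialize a transcript as CSV with a speaker/text header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["speaker", "text"])
    writer.writeheader()
    writer.writerows(transcript)
    return buf.getvalue()
```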
## 🚀 Quick Start
**On HuggingFace Spaces:** Works immediately with zero configuration! The default model loads locally on your Space - no Inference API calls or API keys required.
**Workflow:**
**Static Surveys:**
1. **Generate a Survey**: Start with an outline or topic description
2. **Translate**: Select target languages to reach global audiences
3. **Collect Responses**: Use the generated survey with your participants
4. **Analyze**: Upload responses to uncover key findings and trends
**Conversational Research:**
1. **Design Flow**: Create a conversation flow with scripted questions
2. **Conduct Interview**: AI moderator engages with respondents in real-time
3. **Export & Analyze**: Export transcripts and analyze conversation insights
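The flow-design step above could be modeled along these lines. This is an illustrative sketch only; the class and field names are assumptions, not the actual `conversation_flow.py` API:

```python
from dataclasses import dataclass, field

# Hypothetical data model for a scripted conversation flow, as described
# in the steps above. Names are illustrative, not the repo's actual API.

@dataclass
class FlowQuestion:
    text: str
    allow_followups: bool = True  # lets the AI moderator probe deeper

@dataclass
class ConversationFlow:
    title: str
    questions: list = field(default_factory=list)

    def add_question(self, text, allow_followups=True):
        """Append a scripted question and return self for chaining."""
        self.questions.append(FlowQuestion(text, allow_followups))
        return self
```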
## 🔧 Configuration
### Default: Local Transformers (Completely FREE!)
**β¨ Zero configuration needed!** ConversAI works out-of-the-box on HuggingFace Spaces using local model loading.
**Default Model:** microsoft/phi-2
- ✅ **100% Free** - No API keys, no costs, ever
- ✅ **Excellent quality** - 2.7B-parameter causal language model, great at creative text generation
- ✅ **Good speed** - Typically 5-10 seconds per request after initial load
- ✅ **No API dependencies** - Runs entirely on your Space's compute
- ✅ **Private** - All processing happens locally, nothing sent to external APIs
- ✅ **Contextual** - Generates relevant, topic-specific questions (not generic)
**Setup for HuggingFace Spaces:**
- Just deploy - models download automatically on first run
- **No API keys or tokens required!**
- Models are cached after first download for faster subsequent loads
### Alternative Free Models
You can try different free models by setting the `LLM_MODEL` environment variable:
**Recommended Free Models (Local Transformers):**
| Model | Best For | Speed | Quality | Parameters |
|-------|----------|-------|---------|------------|
| **TinyLlama/TinyLlama-1.1B-Chat-v1.0** | Quick testing | ⚡⚡⚡ Very fast | ⭐⭐ Fair | 1.1B |
| **google/gemma-2b-it** | Faster alternative | ⚡⚡ Fast | ⭐⭐⭐ Good | 2B |
| **microsoft/phi-2** (default) | **Recommended** - best balance | ⚡ Good | ⭐⭐⭐⭐ Excellent | 2.7B |
| **mistralai/Mistral-7B-Instruct-v0.2** | Maximum quality | ⚡ Slower | ⭐⭐⭐⭐⭐ Best | 7B |
**Note:** These are causal language models (decoder-only) designed for text generation. **Do NOT use Flan-T5 models** - they copy examples instead of generating contextual questions.
**To change model:**
```bash
# In Space Settings → Variables
LLM_MODEL=google/gemma-2b-it  # Faster alternative
# Or for maximum quality (requires more memory)
LLM_MODEL=mistralai/Mistral-7B-Instruct-v0.2
```
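Inside the app, picking up that variable could look like this (a sketch only; `resolve_model` is an illustrative name, and the actual lookup in `llm_backend.py` may differ):

```python
import os

# Reads the LLM_MODEL variable described above, falling back to the
# documented default. resolve_model is an illustrative name, not the
# repo's actual function.
DEFAULT_MODEL = "microsoft/phi-2"

def resolve_model() -> str:
    """Return the model id from LLM_MODEL, or the default if unset."""
    return os.environ.get("LLM_MODEL", DEFAULT_MODEL)
```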
**Why Local Transformers?**
- ✅ **No API dependencies** - runs entirely on your Space
- ✅ **No network failures** - no external API calls to 404 or time out
- ✅ **Fast after loading** - models cached in memory
- ✅ **Instruction-tuned** - designed for following prompts
- ✅ **Privacy** - all processing happens locally
### Tips for Best Performance with Local Models
1. **Use Phi-2 (default)** - Best balance of quality and resource usage
2. **First load takes time** - Model downloads and loads (~2-3 minutes for Phi-2)
3. **Subsequent requests are fast** - Model stays in memory (5-10 seconds)
4. **For maximum quality** - Use Mistral-7B-Instruct (requires 8GB+ RAM)
5. **For faster loading** - Use Gemma-2B-IT or TinyLlama (good quality, smaller)
6. **Avoid Flan-T5 models** - They copy examples instead of generating contextual questions
7. **Be specific in outlines** - More detail helps model generate better questions
## 📦 Installation
```bash
# Install dependencies
pip install -r requirements.txt
# Check environment setup (optional but recommended)
python check_env.py
# Run the app
python app.py
```
## 🏗️ Architecture
ConversAI is built with a modular architecture:
- **llm_backend.py** - Unified LLM interface supporting multiple providers
- **survey_generator.py** - AI-powered survey generation
- **survey_translator.py** - Multi-language translation engine
- **data_analyzer.py** - Qualitative data analysis and insights
- **conversation_flow.py** - Conversation flow design and management
- **conversation_session.py** - Live conversation session tracking
- **conversation_moderator.py** - AI-powered interview moderator
- **app.py** - Gradio-based web interface
- **export_utils.py** - Export to JSON, CSV, Markdown
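A unified interface like the one **llm_backend.py** provides might be shaped as follows. This is a hypothetical sketch under assumed names (`LLMBackend`, `generate`, `EchoBackend`), not the module's real API:

```python
from typing import Protocol

# Hypothetical shape of a provider-agnostic backend: each provider exposes
# a single generate() method, so the rest of the app never needs to know
# which model or provider is actually running.

class LLMBackend(Protocol):
    def generate(self, prompt: str, max_new_tokens: int = 256) -> str: ...

class EchoBackend:
    """Trivial stand-in backend, useful for offline tests."""
    def generate(self, prompt: str, max_new_tokens: int = 256) -> str:
        return prompt[:max_new_tokens]

def make_survey_intro(backend: LLMBackend, topic: str) -> str:
    """Example caller: any backend satisfying the protocol works here."""
    return backend.generate(f"Write a survey introduction about {topic}.")
```

Because callers depend only on the protocol, swapping Phi-2 for Gemma or Mistral would not require touching the survey, translation, or analysis modules.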
## 🔒 Data Privacy
- All processing happens through your configured LLM backend (local by default, so nothing leaves your Space)
- No data is stored permanently by this application
- Survey data and responses remain in your control
- Suitable for sensitive research projects
## 🤝 Contributing
Contributions are welcome! This is a production-grade application designed for real-world qualitative research.
## 📄 License
MIT License - Feel free to use for research and commercial purposes.
---
## 📚 Documentation
**New to ConversAI?** Start with **[USER_GUIDE.md](USER_GUIDE.md)** for a complete walkthrough.
**Quick Links:**
- 📖 [Complete User Guide](USER_GUIDE.md) - How to use ConversAI (START HERE)
- ⚡ [Quick Start for HF Spaces](QUICK_START_HF_SPACES.md) - 5-minute deployment
- 🔧 [Troubleshooting](TROUBLESHOOTING.md) - Common issues and solutions
- 🤖 [Free Models Guide](FREE_MODELS.md) - Best free models to use
**Diagnostic Tools:**
- Run `python check_env.py` - Check your environment setup
- Run `python test_hf_backend.py` - Test HuggingFace connection
---
Built with ❤️ using Gradio and state-of-the-art open-source LLMs