---
title: ConversAI - Qualitative Research Assistant
emoji: πŸ”¬
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.45.0
app_file: app.py
pinned: false
license: mit
---
# ConversAI - AI-Powered Qualitative Research Assistant
Battle the blank page, reach global audiences, and uncover insights with AI assistance.
---
> **✨ NEW (Nov 2025):** Now uses **Microsoft Phi-3** - faster, more reliable, and **completely FREE** on HuggingFace!
---
## 🌟 Features
### πŸ“ Survey Generation
- Generate professional surveys from simple outlines
- Follow industry best practices automatically
- Choose from qualitative, quantitative, or mixed methods
- Customize number of questions and target audience
### 🌍 Survey Translation
- Translate surveys to 18+ languages
- Maintain cultural appropriateness and meaning
- Reach global audiences effortlessly
- Batch translation support
### πŸ“Š Data Analysis
- AI-assisted thematic analysis
- Sentiment analysis and emotional insights
- Automatic pattern and trend detection
- Generate actionable insights and recommendations
- Export detailed analysis reports
## πŸš€ Quick Start
**On HuggingFace Spaces:** Works immediately with zero configuration! Uses the free HF Inference API.
**Workflow:**
1. **Generate a Survey**: Start with an outline or topic description
2. **Translate**: Select target languages to reach global audiences
3. **Collect Responses**: Use the generated survey with your participants
4. **Analyze**: Upload responses to uncover key findings and trends
## πŸ”§ Configuration
### Default: HuggingFace Free Tier (Completely FREE!)
**✨ Zero configuration needed!** ConversAI works out-of-the-box on HuggingFace Spaces.
**Default Model:** Microsoft Phi-3-mini-4k-instruct
- βœ… **100% Free** - No API keys, no costs, ever
- βœ… **Fast** - Optimized for speed (10-30 seconds)
- βœ… **Ungated** - No approval needed, works immediately
- βœ… **Good Quality** - Suitable for professional survey work
- βœ… **Reliable** - Stable on HuggingFace Inference API
**Setup for PUBLIC Spaces (Recommended):**
- Just deploy - uses built-in `HF_TOKEN` automatically
- **No configuration required at all!**
**Setup for PRIVATE Spaces:**
1. Go to https://huggingface.co/settings/tokens
2. Copy your token (read permission is enough)
3. Add in Space Settings β†’ Variables:
- Name: `HUGGINGFACE_API_KEY`
- Value: your_token_here
4. Restart Space
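The token resolution described above can be sketched in a few lines. This is a minimal illustration, assuming the app simply checks the environment: the variable names `HUGGINGFACE_API_KEY` and `HF_TOKEN` match the setup steps, but the helper function itself is hypothetical, not ConversAI's actual code:

```python
import os

def resolve_hf_token():
    # Prefer an explicitly configured key (private Spaces),
    # then fall back to the Space's built-in HF_TOKEN (public Spaces).
    return os.environ.get("HUGGINGFACE_API_KEY") or os.environ.get("HF_TOKEN")
```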
### Alternative Free Models
You can try different free models by setting the `LLM_MODEL` environment variable:
**Recommended Free Models:**
| Model | Best For | Speed | Quality | Ungated |
|-------|----------|-------|---------|---------|
| **microsoft/Phi-3-mini-4k-instruct** (default) | General use, balanced | ⚑⚑ Fast | ⭐⭐⭐ Good | βœ… Yes |
| **google/flan-t5-xxl** | Fast responses, instructions | ⚑⚑⚑ Very Fast | ⭐⭐ Decent | βœ… Yes |
| **mistralai/Mistral-7B-Instruct-v0.2** | Best quality (slower) | ⚑ Slower | ⭐⭐⭐⭐ Excellent | βœ… Yes |
| **google/flan-t5-xl** | Maximum speed | ⚑⚑⚑ Very Fast | ⭐⭐ Decent | βœ… Yes |
| **google/flan-ul2** | Long contexts | ⚑⚑ Fast | ⭐⭐⭐ Good | βœ… Yes |
**To change model:**
```bash
# In Space Settings β†’ Variables
LLM_MODEL=mistralai/Mistral-7B-Instruct-v0.2
```
**Or in code:**
```python
import os
os.environ["LLM_MODEL"] = "google/flan-t5-xxl"
```
### Tips for Best Performance with Free Models
1. **Keep prompts concise** - Shorter outlines = faster generation
2. **Request fewer questions** - Start with 5-10 instead of 20+
3. **Translate one language at a time** - Better reliability on free tier
4. **Be patient on first request** - Models need to "warm up" (30-60 sec)
5. **Use during off-peak hours** - Less queue time, faster responses
6. **Try different models** - Some work better for specific tasks
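Tip 4 can also be handled in code: free-tier endpoints sometimes fail while a model is warming up, so a retry with exponential backoff smooths over the first request. This is a generic sketch; the `call_model` callable and the delay values are illustrative assumptions, not part of ConversAI's API:

```python
import time

def with_retries(call_model, attempts=3, base_delay=2.0):
    """Retry a model call while the endpoint warms up.

    call_model: zero-argument callable that raises on failure.
    Delays grow exponentially: base_delay, 2x, 4x, ...
    """
    for attempt in range(attempts):
        try:
            return call_model()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the real error
            time.sleep(base_delay * (2 ** attempt))
```

Wrap your generation call, e.g. `with_retries(lambda: client.text_generation(prompt))`, to ride out the 30-60 second warm-up window.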
## πŸ“¦ Installation
```bash
# Install dependencies
pip install -r requirements.txt
# Check environment setup (optional but recommended)
python check_env.py
# Run the app
python app.py
```
## πŸ—οΈ Architecture
ConversAI is built with a modular architecture:
- **llm_backend.py** - Unified LLM interface supporting multiple providers
- **survey_generator.py** - AI-powered survey generation
- **survey_translator.py** - Multi-language translation engine
- **data_analyzer.py** - Qualitative data analysis and insights
- **app.py** - Gradio-based web interface
- **export_utils.py** - Export to JSON, CSV, Markdown
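As a rough sketch of what a unified interface like `llm_backend.py` could look like, here is a minimal provider-agnostic wrapper. The class and method names are illustrative assumptions (not ConversAI's actual API); the only detail taken from this README is the `LLM_MODEL` override and the Phi-3 default:

```python
import os

class LLMBackend:
    """Minimal shape of a provider-agnostic LLM wrapper (illustrative)."""

    def __init__(self, model=None):
        # The model is overridable via the LLM_MODEL env var, as described
        # in the Configuration section; Phi-3-mini is the documented default.
        self.model = model or os.environ.get(
            "LLM_MODEL", "microsoft/Phi-3-mini-4k-instruct"
        )

    def generate(self, prompt, max_tokens=512):
        # A real implementation would call the configured provider
        # (e.g. the HF Inference API) here.
        raise NotImplementedError
```

The generator, translator, and analyzer modules would then share this single entry point rather than each talking to a provider directly.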
## πŸ“„ Data Privacy
- All processing is done through your configured LLM provider
- No data is stored permanently by this application
- Survey data and responses remain in your control
- Suitable for sensitive research projects
## 🀝 Contributing
Contributions are welcome! This is a production-grade application designed for real-world qualitative research.
## πŸ“ License
MIT License - Feel free to use for research and commercial purposes.
---
## πŸ“š Documentation
**New to ConversAI?** Start with **[USER_GUIDE.md](USER_GUIDE.md)** for a complete walkthrough.
**Full Documentation Index:** See **[DOCUMENTATION_INDEX.md](DOCUMENTATION_INDEX.md)** for all available guides.
**Quick Links:**
- πŸ“– [Complete User Guide](USER_GUIDE.md) - How to use ConversAI (START HERE)
- ⚑ [Quick Start for HF Spaces](QUICK_START_HF_SPACES.md) - 5-minute deployment
- πŸ”§ [Troubleshooting](TROUBLESHOOTING.md) - Common issues and solutions
- πŸš€ [Deployment Guide](DEPLOYMENT.md) - Detailed deployment instructions
- πŸ“‹ [Usage Guide](USAGE_GUIDE.md) - Technical usage documentation
- πŸ†“ [Free Models Guide](FREE_MODELS.md) - Best free models to use
**Diagnostic Tools:**
- Run `python check_env.py` - Check your environment setup
- Run `python test_hf_backend.py` - Test HuggingFace connection
---
Built with ❀️ using Gradio and state-of-the-art open-source LLMs