---
title: ConversAI - Qualitative Research Assistant
emoji: 💬
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.45.0
app_file: app.py
pinned: false
license: mit
---
# ConversAI - AI-Powered Qualitative Research Assistant

Battle the blank page, reach global audiences, and uncover insights with AI assistance.

---

> **✨ NEW (Nov 2025):** Now uses **Microsoft Phi-3** - faster, more reliable, and **completely free** on HuggingFace!

---
## 🚀 Features

### 📝 Survey Generation

- Generate professional surveys from simple outlines
- Follow industry best practices automatically
- Choose from qualitative, quantitative, or mixed methods
- Customize the number of questions and target audience

### 🌍 Survey Translation

- Translate surveys into 18+ languages
- Maintain cultural appropriateness and meaning
- Reach global audiences effortlessly
- Batch translation support

### 📊 Data Analysis

- AI-assisted thematic analysis
- Sentiment analysis and emotional insights
- Automatic pattern and trend detection
- Generate actionable insights and recommendations
- Export detailed analysis reports
## 🚀 Quick Start

**On HuggingFace Spaces:** Works immediately with zero configuration, using the free HF Inference API.

**Workflow:**

1. **Generate a Survey**: Start with an outline or topic description
2. **Translate**: Select target languages to reach global audiences
3. **Collect Responses**: Use the generated survey with your participants
4. **Analyze**: Upload responses to uncover key findings and trends
## 🔧 Configuration

### Default: HuggingFace Free Tier (Completely Free!)

**✨ Zero configuration needed!** ConversAI works out of the box on HuggingFace Spaces.

**Default model:** microsoft/Phi-3-mini-4k-instruct

- ✅ **100% free** - no API keys, no costs, ever
- ✅ **Fast** - optimized for speed (10-30 seconds per request)
- ✅ **Ungated** - no approval needed, works immediately
- ✅ **Good quality** - suitable for professional survey work
- ✅ **Reliable** - stable on the HuggingFace Inference API

**Setup for public Spaces (recommended):**

- Just deploy - the built-in `HF_TOKEN` is used automatically
- **No configuration required at all!**

**Setup for private Spaces:**

1. Go to https://huggingface.co/settings/tokens
2. Copy your token (read permission is enough)
3. Add it in Space Settings → Variables:
   - Name: `HUGGINGFACE_API_KEY`
   - Value: `your_token_here`
4. Restart the Space
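The token lookup implied by the two setups above can be sketched as follows. This is a minimal illustration, not ConversAI's actual code: `resolve_hf_token` is a hypothetical helper name, and the assumed precedence is that an explicit `HUGGINGFACE_API_KEY` (private Spaces) wins over the `HF_TOKEN` that public Spaces provide automatically.

```python
import os

def resolve_hf_token():
    """Return the HuggingFace API token, preferring an explicitly
    configured HUGGINGFACE_API_KEY (private Spaces) over the HF_TOKEN
    that public Spaces inject automatically (hypothetical helper)."""
    return os.environ.get("HUGGINGFACE_API_KEY") or os.environ.get("HF_TOKEN")
```

If neither variable is set, the helper returns `None`, which is a reasonable signal for the app to surface a configuration error.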
### Alternative Free Models

You can try different free models by setting the `LLM_MODEL` environment variable.

**Recommended free models:**

| Model | Best For | Speed | Quality | Ungated |
|-------|----------|-------|---------|---------|
| **microsoft/Phi-3-mini-4k-instruct** (default) | General use, balanced | ⚡⚡ Fast | ⭐⭐⭐ Good | ✅ Yes |
| **google/flan-t5-xxl** | Fast responses, instructions | ⚡⚡⚡ Very fast | ⭐⭐ Decent | ✅ Yes |
| **mistralai/Mistral-7B-Instruct-v0.2** | Best quality (slower) | ⚡ Slower | ⭐⭐⭐⭐ Excellent | ✅ Yes |
| **google/flan-t5-xl** | Maximum speed | ⚡⚡⚡ Very fast | ⭐⭐ Decent | ✅ Yes |
| **google/flan-ul2** | Long contexts | ⚡⚡ Fast | ⭐⭐⭐ Good | ✅ Yes |
**To change the model:**

```bash
# In Space Settings → Variables
LLM_MODEL=mistralai/Mistral-7B-Instruct-v0.2
```

**Or in code:**

```python
import os
os.environ["LLM_MODEL"] = "google/flan-t5-xxl"
```
### Tips for Best Performance with Free Models

1. **Keep prompts concise** - shorter outlines = faster generation
2. **Request fewer questions** - start with 5-10 instead of 20+
3. **Translate one language at a time** - better reliability on the free tier
4. **Be patient on the first request** - models need to "warm up" (30-60 seconds)
5. **Use during off-peak hours** - less queue time, faster responses
6. **Try different models** - some work better for specific tasks
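Tip 4 above (cold-start "warm up") is usually handled with retries. The helper below is a generic sketch, not part of ConversAI: it retries any callable with exponential backoff, which is a common way to ride out the transient errors a free-tier model returns while it is still loading.

```python
import time

def with_retries(fn, attempts=4, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on any exception.
    Useful for the first request to a free-tier model, which may fail
    while the model is still warming up (generic sketch, hypothetical name)."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the real error
            time.sleep(base_delay * 2 ** attempt)

# Usage sketch: wrap whatever call hits the Inference API, e.g.
# result = with_retries(lambda: client.text_generation(prompt))
```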
## 📦 Installation

```bash
# Install dependencies
pip install -r requirements.txt

# Check environment setup (optional but recommended)
python check_env.py

# Run the app
python app.py
```
## 🏗️ Architecture

ConversAI is built with a modular architecture:

- **llm_backend.py** - Unified LLM interface supporting multiple providers
- **survey_generator.py** - AI-powered survey generation
- **survey_translator.py** - Multi-language translation engine
- **data_analyzer.py** - Qualitative data analysis and insights
- **app.py** - Gradio-based web interface
- **export_utils.py** - Export to JSON, CSV, and Markdown
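The "unified LLM interface" idea behind `llm_backend.py` can be sketched as a small dispatch class. All names here are hypothetical illustrations, not ConversAI's real API; the only grounded details are the `LLM_MODEL` environment variable and the default model from the Configuration section.

```python
import os

class LLMBackend:
    """Minimal sketch of a unified backend: one generate() entry point,
    with the concrete model resolved from the LLM_MODEL environment
    variable (class and method names are hypothetical)."""

    def __init__(self, default_model="microsoft/Phi-3-mini-4k-instruct"):
        self.model = os.environ.get("LLM_MODEL", default_model)

    def generate(self, prompt):
        # A real backend would call the HF Inference API here; this
        # stub just echoes, to show the shape of the dispatch.
        return f"[{self.model}] {prompt}"
```

The feature modules (survey generation, translation, analysis) can then share one backend instance and stay independent of which provider or model is configured.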
## 🔒 Data Privacy

- All processing is done through your configured LLM provider
- No data is stored permanently by this application
- Survey data and responses remain in your control
- Suitable for sensitive research projects
## 🤝 Contributing

Contributions are welcome! This is a production-grade application designed for real-world qualitative research.

## 📄 License

MIT License - feel free to use it for research and commercial purposes.
---

## 📚 Documentation

**New to ConversAI?** Start with **[USER_GUIDE.md](USER_GUIDE.md)** for a complete walkthrough.

**Full documentation index:** See **[DOCUMENTATION_INDEX.md](DOCUMENTATION_INDEX.md)** for all available guides.

**Quick links:**

- 📖 [Complete User Guide](USER_GUIDE.md) - How to use ConversAI (start here)
- ⚡ [Quick Start for HF Spaces](QUICK_START_HF_SPACES.md) - 5-minute deployment
- 🔧 [Troubleshooting](TROUBLESHOOTING.md) - Common issues and solutions
- 🚀 [Deployment Guide](DEPLOYMENT.md) - Detailed deployment instructions
- 📘 [Usage Guide](USAGE_GUIDE.md) - Technical usage documentation
- 🆓 [Free Models Guide](FREE_MODELS.md) - Best free models to use

**Diagnostic tools:**

- Run `python check_env.py` to check your environment setup
- Run `python test_hf_backend.py` to test the HuggingFace connection

---

Built with ❤️ using Gradio and state-of-the-art open-source LLMs