---
title: Project Echo - Qualitative/Quantitative Research Assistant
emoji: 🔬
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
license: mit
---

# ConversAI - AI-Powered Qualitative Research Assistant

Battle the blank page, reach global audiences, and uncover insights with AI assistance.

---

> **✨ UPDATED (Nov 2025):** Now uses **local transformers** with **Microsoft Phi-2** - fast, contextual, and **completely FREE**! No API dependencies; runs directly on HuggingFace Spaces and generates actual topic-specific questions (not generic templates).

---

## 🌟 Features

### 📝 Survey Generation
- Generate professional surveys from simple outlines
- Follow industry best practices automatically
- Choose from qualitative, quantitative, or mixed methods
- Customize the number of questions and target audience

### 🌍 Survey Translation
- Translate surveys into 18+ languages
- Maintain cultural appropriateness and meaning
- Reach global audiences effortlessly
- Batch translation support

### 📊 Data Analysis
- AI-assisted thematic analysis
- Sentiment analysis and emotional insights
- Automatic pattern and trend detection
- Generate actionable insights and recommendations
- Export detailed analysis reports

### 💬 Conversational Research
- Design custom conversation flows with scripted questions
- AI-moderated interviews with dynamic follow-up questions
- Real-time adaptation based on respondent answers
- Intelligent probing for deeper insights
- Automatic conversation summarization
- Export conversations as transcripts, JSON, or CSV

## 🚀 Quick Start

**On HuggingFace Spaces:** Works immediately with zero configuration - the default local model downloads and loads automatically.

**Workflow:**

**Static Surveys:**
1. **Generate a Survey**: Start with an outline or topic description
2. **Translate**: Select target languages to reach global audiences
3. **Collect Responses**: Use the generated survey with your participants
4. **Analyze**: Upload responses to uncover key findings and trends

**Conversational Research:**
1. **Design Flow**: Create a conversation flow with scripted questions
2. **Conduct Interview**: The AI moderator engages with respondents in real time
3. **Export & Analyze**: Export transcripts and analyze conversation insights

## 🔧 Configuration

### Default: Local Transformers (Completely FREE!)

**✨ Zero configuration needed!** ConversAI works out of the box on HuggingFace Spaces using local model loading.

**Default Model:** microsoft/phi-2
- ✅ **100% Free** - No API keys, no costs, ever
- ✅ **Excellent quality** - 2.7B-parameter causal language model, great at creative text generation
- ✅ **Good speed** - Typically 5-10 seconds per request after the initial load
- ✅ **No API dependencies** - Runs entirely on your Space's compute
- ✅ **Private** - All processing happens locally; nothing is sent to external APIs
- ✅ **Contextual** - Generates relevant, topic-specific questions (not generic ones)

**Setup for HuggingFace Spaces:**
- Just deploy - models download automatically on first run
- **No API keys or tokens required!**
- Models are cached after the first download for faster subsequent loads

### Alternative Free Models

You can try different free models by setting the `LLM_MODEL` environment variable.

**Recommended Free Models (Local Transformers):**

| Model | Best For | Speed | Quality | Parameters |
|-------|----------|-------|---------|------------|
| **TinyLlama/TinyLlama-1.1B-Chat-v1.0** | Quick testing | ⚡⚡⚡ Very Fast | ⭐⭐ Fair | 1.1B |
| **google/gemma-2b-it** | Faster alternative | ⚡⚡ Fast | ⭐⭐⭐ Good | 2B |
| **microsoft/phi-2** (default) | **Recommended** - best balance | ⚡ Good | ⭐⭐⭐⭐ Excellent | 2.7B |
| **mistralai/Mistral-7B-Instruct-v0.2** | Maximum quality | ⚡ Slower | ⭐⭐⭐⭐⭐ Best | 7B |

**Note:** These are causal language models (decoder-only) designed for text generation.
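The `LLM_MODEL` selection described above can be sketched as follows. This is a minimal illustration, not ConversAI's actual `llm_backend.py` code; the `resolve_model` helper and its seq2seq guard are assumptions, though the model IDs are the real Hugging Face identifiers from the table.

```python
import os

# Default free causal LM from the table above (real Hugging Face model ID).
DEFAULT_MODEL = "microsoft/phi-2"

def resolve_model(env=None):
    """Pick the model from the LLM_MODEL env var, defaulting to Phi-2."""
    env = os.environ if env is None else env
    name = env.get("LLM_MODEL", "").strip()
    if not name:
        return DEFAULT_MODEL
    if "flan-t5" in name.lower():
        # Seq2seq models like Flan-T5 copy examples instead of generating
        # contextual questions, so reject them early.
        raise ValueError(f"Unsupported seq2seq model: {name}")
    return name

print(resolve_model({"LLM_MODEL": "google/gemma-2b-it"}))  # google/gemma-2b-it
print(resolve_model({}))                                   # microsoft/phi-2
```

The resolved name would then be passed to a `transformers` text-generation pipeline; failing fast on seq2seq models avoids the copy-the-example failure mode noted below.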
**Do NOT use Flan-T5 models** - they copy examples instead of generating contextual questions.

**To change model:**

```bash
# In Space Settings → Variables
LLM_MODEL=google/gemma-2b-it  # Faster alternative

# Or for maximum quality (requires more memory)
LLM_MODEL=mistralai/Mistral-7B-Instruct-v0.2
```

**Why Local Transformers?**
- ✅ **No API dependencies** - runs entirely on your Space
- ✅ **No 404 errors** - no network issues
- ✅ **Fast after loading** - models are cached in memory
- ✅ **Instruction-tuned** - designed for following prompts
- ✅ **Privacy** - all processing happens locally

### Tips for Best Performance with Local Models

1. **Use Phi-2 (default)** - Best balance of quality and resource usage
2. **First load takes time** - The model downloads and loads (~2-3 minutes for Phi-2)
3. **Subsequent requests are fast** - The model stays in memory (5-10 seconds per request)
4. **For maximum quality** - Use Mistral-7B-Instruct (requires 8GB+ RAM)
5. **For faster loading** - Use Gemma-2B-IT or TinyLlama (good quality, smaller)
6. **Avoid Flan-T5 models** - They copy examples instead of generating contextual questions
7. **Be specific in outlines** - More detail helps the model generate better questions

## 📦 Installation

```bash
# Install dependencies
pip install -r requirements.txt

# Check environment setup (optional but recommended)
python check_env.py

# Run the app
python app.py
```

## 🏗️ Architecture

ConversAI is built with a modular architecture:

- **llm_backend.py** - Unified LLM interface supporting multiple providers
- **survey_generator.py** - AI-powered survey generation
- **survey_translator.py** - Multi-language translation engine
- **data_analyzer.py** - Qualitative data analysis and insights
- **conversation_flow.py** - Conversation flow design and management
- **conversation_session.py** - Live conversation session tracking
- **conversation_moderator.py** - AI-powered interview moderator
- **app.py** - Gradio-based web interface
- **export_utils.py** - Export to JSON, CSV, and Markdown

## 📄 Data Privacy

- All processing is done through your configured LLM provider
- No data is stored permanently by this application
- Survey data and responses remain in your control
- Suitable for sensitive research projects

## 🤝 Contributing

Contributions are welcome! This is a production-grade application designed for real-world qualitative research.

## 📝 License

MIT License - feel free to use it for research and commercial purposes.

---

## 📚 Documentation

**New to ConversAI?** Start with **[USER_GUIDE.md](USER_GUIDE.md)** for a complete walkthrough.
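As a concrete illustration of the transcript/JSON/CSV export formats mentioned in the architecture above, an exporter along these lines would work. This is a sketch only; the function name and the `speaker`/`text` field names are assumptions, not `export_utils.py`'s actual API.

```python
import csv
import io
import json

def export_transcript(turns, fmt="json"):
    """Serialize a list of {"speaker", "text"} turns to JSON or CSV."""
    if fmt == "json":
        return json.dumps(turns, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=["speaker", "text"])
        writer.writeheader()
        writer.writerows(turns)
        return buf.getvalue()
    raise ValueError(f"Unknown format: {fmt}")

turns = [
    {"speaker": "moderator", "text": "What frustrates you most about onboarding?"},
    {"speaker": "respondent", "text": "Too many tools to set up on day one."},
]
print(export_transcript(turns, "csv"))
```

Keeping exports as plain stdlib serialization keeps the data portable into any analysis tool.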
**Quick Links:**
- 📖 [Complete User Guide](USER_GUIDE.md) - How to use ConversAI (START HERE)
- ⚡ [Quick Start for HF Spaces](QUICK_START_HF_SPACES.md) - 5-minute deployment
- 🔧 [Troubleshooting](TROUBLESHOOTING.md) - Common issues and solutions
- 🆓 [Free Models Guide](FREE_MODELS.md) - The best free models to use

**Diagnostic Tools:**
- Run `python check_env.py` - Check your environment setup
- Run `python test_hf_backend.py` - Test the HuggingFace connection

---

Built with ❤️ using Gradio and state-of-the-art open-source LLMs
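For reference, the environment check run by the diagnostic tools listed above could be as simple as probing for importable packages. This is a sketch of the general idea, not the actual contents of `check_env.py`; the package list is an assumption.

```python
import importlib.util

def find_missing(required=("gradio", "transformers", "torch")):
    """Return the required packages that cannot be imported."""
    return [pkg for pkg in required if importlib.util.find_spec(pkg) is None]

missing = find_missing()
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("Environment looks good - all required packages found.")
```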