---
title: Project Echo - Qualitative/Quantitative Research Assistant
emoji: 🔬
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
license: mit
---
# ConversAI - AI-Powered Qualitative Research Assistant

Battle the blank page, reach global audiences, and uncover insights with AI assistance.

✨ **UPDATED (Nov 2025):** Now uses local transformers with Microsoft Phi-2 - fast, contextual, and completely FREE! No API dependencies; runs directly on HuggingFace Spaces. Generates actual topic-specific questions (not generic templates).
## 🌟 Features
### 📝 Survey Generation
- Generate professional surveys from simple outlines
- Follow industry best practices automatically
- Choose from qualitative, quantitative, or mixed methods
- Customize number of questions and target audience
### 🌍 Survey Translation
- Translate surveys to 18+ languages
- Maintain cultural appropriateness and meaning
- Reach global audiences effortlessly
- Batch translation support
### 📊 Data Analysis
- AI-assisted thematic analysis
- Sentiment analysis and emotional insights
- Automatic pattern and trend detection
- Generate actionable insights and recommendations
- Export detailed analysis reports
### 💬 Conversational Research
- Design custom conversation flows with scripted questions (a hypothetical flow shape is sketched after this list)
- AI-moderated interviews with dynamic follow-up questions
- Real-time adaptation based on respondent answers
- Intelligent probing for deeper insights
- Automatic conversation summarization and export
- Export conversations as transcripts, JSON, or CSV
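A scripted flow with optional AI follow-ups can be represented as plain data. The sketch below is hypothetical; the field names are illustrative and not the actual schema used by `conversation_flow.py`:

```python
# Hypothetical conversation-flow shape; field names are illustrative
# and do not reflect the actual schema in conversation_flow.py.
flow = {
    "title": "Remote Work Study",
    "questions": [
        {
            "id": "q1",
            "text": "How has remote work changed your daily routine?",
            "allow_followups": True,   # the AI moderator may probe for detail
            "max_followups": 2,
        },
        {
            "id": "q2",
            "text": "Which collaboration tools do you rely on most?",
            "allow_followups": False,  # scripted question only
        },
    ],
}
```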
## 🚀 Quick Start
**On HuggingFace Spaces:** Works immediately with zero configuration! The model runs locally on your Space's compute; no inference API is involved.
**Workflow:**

**Static Surveys:**
- Generate a Survey: Start with an outline or topic description
- Translate: Select target languages to reach global audiences
- Collect Responses: Use the generated survey with your participants
- Analyze: Upload responses to uncover key findings and trends
**Conversational Research:**
- Design Flow: Create a conversation flow with scripted questions
- Conduct Interview: AI moderator engages with respondents in real-time
- Export & Analyze: Export transcripts and analyze conversation insights
## 🔧 Configuration

### Default: Local Transformers (Completely FREE!)

✨ Zero configuration needed! ConversAI works out of the box on HuggingFace Spaces using local model loading.

**Default Model:** `microsoft/phi-2`
- ✅ **100% Free** - No API keys, no costs, ever
- ✅ **Excellent quality** - 2.7B-parameter causal language model, great at creative text generation
- ✅ **Good speed** - Typically 5-10 seconds per request after the initial load
- ✅ **No API dependencies** - Runs entirely on your Space's compute
- ✅ **Private** - All processing happens locally; nothing is sent to external APIs
- ✅ **Contextual** - Generates relevant, topic-specific questions (not generic ones)
**Setup for HuggingFace Spaces:**
- Just deploy - models download automatically on first run (see the loading sketch below)
- No API keys or tokens required!
- Models are cached after the first download for faster subsequent loads
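For reference, loading and querying the default model with the `transformers` library looks roughly like this. The model ID comes from this README; the prompt and generation parameters are illustrative, not the app's actual settings:

```python
# Minimal sketch of local generation with transformers.
# "microsoft/phi-2" is the README's default; everything else is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/phi-2",
    torch_dtype="auto",   # pick fp16/bf16 automatically where supported
    device_map="auto",    # use the GPU if the Space has one
)

prompt = "Write three open-ended interview questions about remote-work productivity:\n1."
output = generator(prompt, max_new_tokens=150, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```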
### Alternative Free Models

You can try different free models by setting the `LLM_MODEL` environment variable.

**Recommended Free Models (Local Transformers):**
| Model | Best For | Speed | Quality | Parameters |
|---|---|---|---|---|
| `TinyLlama/TinyLlama-1.1B-Chat-v1.0` | Quick testing | ⚡⚡⚡ Very fast | ⭐⭐ Fair | 1.1B |
| `google/gemma-2b-it` | Faster alternative | ⚡⚡ Fast | ⭐⭐⭐ Good | 2B |
| `microsoft/phi-2` (default) | Recommended - best balance | ⚡ Good | ⭐⭐⭐⭐ Excellent | 2.7B |
| `mistralai/Mistral-7B-Instruct-v0.2` | Maximum quality | ⚡ Slower | ⭐⭐⭐⭐⭐ Best | 7B |
Note: These are causal language models (decoder-only) designed for text generation. Do NOT use Flan-T5 models - they copy examples instead of generating contextual questions.
**To change the model:**

```bash
# In Space Settings → Variables
LLM_MODEL=google/gemma-2b-it  # Faster alternative

# Or, for maximum quality (requires more memory):
LLM_MODEL=mistralai/Mistral-7B-Instruct-v0.2
```
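On the application side, the backend presumably falls back to the default when the variable is unset. A hypothetical sketch; the actual selection logic in `llm_backend.py` may differ:

```python
# Hypothetical model selection; llm_backend.py's real logic may differ.
import os
from transformers import pipeline

MODEL_ID = os.environ.get("LLM_MODEL", "microsoft/phi-2")  # README's default
generator = pipeline("text-generation", model=MODEL_ID, device_map="auto")
```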
### Why Local Transformers?
- ✅ **No API dependencies** - runs entirely on your Space
- ✅ **No 404s or network issues** - nothing depends on a remote endpoint
- ✅ **Fast after loading** - models stay cached in memory
- ✅ **Instruction-tuned** - designed for following prompts
- ✅ **Privacy** - all processing happens locally
### Tips for Best Performance with Local Models
- **Use Phi-2 (default)** - best balance of quality and resource usage
- **First load takes time** - the model downloads and loads (~2-3 minutes for Phi-2)
- **Subsequent requests are fast** - the model stays in memory (5-10 seconds per request)
- **For maximum quality** - use Mistral-7B-Instruct (requires 8GB+ RAM)
- **For faster loading** - use Gemma-2B-IT or TinyLlama (smaller, still good quality)
- **Avoid Flan-T5 models** - they copy examples instead of generating contextual questions
- **Be specific in outlines** - more detail helps the model generate better questions
## 📦 Installation

```bash
# Install dependencies
pip install -r requirements.txt

# Check environment setup (optional but recommended)
python check_env.py

# Run the app
python app.py
```
## 🏗️ Architecture

ConversAI is built with a modular architecture (a hypothetical usage sketch follows the list):

- `llm_backend.py` - Unified LLM interface supporting multiple providers
- `survey_generator.py` - AI-powered survey generation
- `survey_translator.py` - Multi-language translation engine
- `data_analyzer.py` - Qualitative data analysis and insights
- `conversation_flow.py` - Conversation flow design and management
- `conversation_session.py` - Live conversation session tracking
- `conversation_moderator.py` - AI-powered interview moderator
- `app.py` - Gradio-based web interface
- `export_utils.py` - Export to JSON, CSV, and Markdown
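To picture how these pieces might fit together, here is a purely hypothetical composition; the function names and signatures are invented for illustration and are not the modules' actual API:

```python
# Purely illustrative composition of the modules above; the real function
# names and signatures in these files may differ.
from survey_generator import generate_survey    # hypothetical name
from survey_translator import translate_survey  # hypothetical name

survey = generate_survey(
    outline="Customer satisfaction with onboarding",
    method="mixed",
    num_questions=8,
)
spanish_survey = translate_survey(survey, target_language="es")
```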
## 🔒 Data Privacy
- All processing happens through your configured LLM backend; with the default local model, nothing leaves your Space
- No data is stored permanently by this application
- Survey data and responses remain in your control
- Suitable for sensitive research projects
## 🤝 Contributing

Contributions are welcome! This is a production-grade application designed for real-world qualitative research.

## 📄 License

MIT License - feel free to use it for research and commercial purposes.
## 📚 Documentation

New to ConversAI? Start with USER_GUIDE.md for a complete walkthrough.

**Quick Links:**
- 📖 Complete User Guide - How to use ConversAI (START HERE)
- ⚡ Quick Start for HF Spaces - 5-minute deployment
- 🔧 Troubleshooting - Common issues and solutions
- 📚 Free Models Guide - Best free models to use
**Diagnostic Tools:**
- Run `python check_env.py` - check your environment setup (a rough sketch of such a check appears below)
- Run `python test_hf_backend.py` - test the HuggingFace connection
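For a sense of what an environment check covers, a minimal sketch of such a script might look like this; the actual `check_env.py` may test different things:

```python
# Illustrative environment check; the real check_env.py may differ.
import importlib.util
import os

for pkg in ("gradio", "transformers", "torch"):
    status = "OK" if importlib.util.find_spec(pkg) else "MISSING"
    print(f"{pkg}: {status}")

print("LLM_MODEL:", os.environ.get("LLM_MODEL", "microsoft/phi-2 (default)"))
```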
Built with ❤️ using Gradio and state-of-the-art open-source LLMs