---
title: Project Echo - Qualitative/Quantitative Research Assistant
emoji: 🔬
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
license: mit
---

# ConversAI - AI-Powered Qualitative Research Assistant

Battle the blank page, reach global audiences, and uncover insights with AI assistance.


✨ **Updated (Nov 2025):** ConversAI now uses local transformers with Microsoft Phi-2: fast, contextual, and completely free. No API dependencies; it runs directly on HuggingFace Spaces and generates topic-specific questions rather than generic templates.


## 🌟 Features

πŸ“ Survey Generation

- Generate professional surveys from simple outlines
- Follow industry best practices automatically
- Choose from qualitative, quantitative, or mixed methods
- Customize the number of questions and the target audience

### 🌍 Survey Translation

- Translate surveys into 18+ languages
- Maintain cultural appropriateness and meaning
- Reach global audiences effortlessly
- Batch translation support
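Batch translation reduces to a nested loop over questions and target languages. The sketch below is illustrative only: `translate_one` stands in for whatever single-string translator the backend provides, and `translate_survey` is not the actual `survey_translator.py` API.

```python
# Hypothetical batch-translation helper; the real survey_translator.py
# API is not shown here. `translate_one(text, lang)` is any callable
# that translates one string into one target language.
def translate_survey(questions, languages, translate_one):
    """Return {language: [translated questions]} for every target language."""
    return {
        lang: [translate_one(q, lang) for q in questions]
        for lang in languages
    }
```

Passing the translator in as a callable keeps the batching logic independent of which LLM or translation service sits underneath.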

### 📊 Data Analysis

- AI-assisted thematic analysis
- Sentiment analysis and emotional insights
- Automatic pattern and trend detection
- Generate actionable insights and recommendations
- Export detailed analysis reports

### 💬 Conversational Research

- Design custom conversation flows with scripted questions
- AI-moderated interviews with dynamic follow-up questions
- Real-time adaptation based on respondent answers
- Intelligent probing for deeper insights
- Automatic conversation summarization
- Export conversations as transcripts, JSON, or CSV
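The three export formats above can be illustrated with a small helper. The turn structure and function name here are assumptions for illustration, not the actual `export_utils.py` interface.

```python
import csv
import io
import json

def export_conversation(turns, fmt="transcript"):
    """Export a list of {"role": ..., "text": ...} turns as a plain
    transcript, JSON, or CSV string. Illustrative sketch only."""
    if fmt == "json":
        return json.dumps(turns, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=["role", "text"])
        writer.writeheader()
        writer.writerows(turns)
        return buf.getvalue()
    # Default: human-readable transcript, one turn per line.
    return "\n".join(f"{t['role']}: {t['text']}" for t in turns)
```

All three branches return a string, so the same helper can feed Gradio download components regardless of the chosen format.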

## 🚀 Quick Start

**On HuggingFace Spaces:** works immediately with zero configuration. Models load locally on the Space's own compute; no API keys required.

**Workflow:**

**Static Surveys:**

1. **Generate a Survey**: start with an outline or topic description
2. **Translate**: select target languages to reach global audiences
3. **Collect Responses**: use the generated survey with your participants
4. **Analyze**: upload responses to uncover key findings and trends

**Conversational Research:**

1. **Design Flow**: create a conversation flow with scripted questions
2. **Conduct Interview**: the AI moderator engages with respondents in real time
3. **Export & Analyze**: export transcripts and analyze conversation insights
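The flow designed in step 1 can be as simple as a topic, scripted questions, and a follow-up budget. The field names below are illustrative, not the actual `conversation_flow.py` schema.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationFlow:
    """Illustrative flow definition: scripted questions the moderator
    must ask, plus a cap on AI-generated follow-ups per question."""
    topic: str
    scripted_questions: list[str] = field(default_factory=list)
    max_followups_per_question: int = 2

flow = ConversationFlow(
    topic="remote work habits",
    scripted_questions=[
        "How has your daily routine changed since working remotely?",
        "What do you miss most about the office?",
    ],
)
```

Capping follow-ups per scripted question is one simple way to keep an AI moderator's probing from derailing the interview script.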

## 🔧 Configuration

### Default: Local Transformers (Completely Free)

✨ Zero configuration needed! ConversAI works out-of-the-box on HuggingFace Spaces using local model loading.

**Default model:** `microsoft/phi-2`

- ✅ **100% free**: no API keys, no costs, ever
- ✅ **Strong quality**: 2.7B-parameter causal language model, good at creative text generation
- ✅ **Good speed**: typically 5-10 seconds per request after the initial load
- ✅ **No API dependencies**: runs entirely on your Space's compute
- ✅ **Private**: all processing happens locally; nothing is sent to external APIs
- ✅ **Contextual**: generates relevant, topic-specific questions, not generic templates

**Setup for HuggingFace Spaces:**

- Just deploy: models download automatically on first run
- No API keys or tokens required
- Models are cached after the first download for faster subsequent loads

### Alternative Free Models

You can try different free models by setting the `LLM_MODEL` environment variable.

**Recommended free models (local transformers):**

| Model | Best For | Speed | Quality | Parameters |
|-------|----------|-------|---------|------------|
| `TinyLlama/TinyLlama-1.1B-Chat-v1.0` | Quick testing | ⚡⚡⚡ Very fast | ⭐⭐ Fair | 1.1B |
| `google/gemma-2b-it` | Faster alternative | ⚡⚡ Fast | ⭐⭐⭐ Good | 2B |
| `microsoft/phi-2` (default) | **Recommended**: best balance | ⚡ Good | ⭐⭐⭐⭐ Excellent | 2.7B |
| `mistralai/Mistral-7B-Instruct-v0.2` | Maximum quality | ⚡ Slower | ⭐⭐⭐⭐⭐ Best | 7B |

> **Note:** These are causal (decoder-only) language models designed for text generation. Do **not** use Flan-T5 models: they copy examples instead of generating contextual questions.

**To change the model:**

```bash
# In Space Settings → Variables
LLM_MODEL=google/gemma-2b-it  # faster alternative

# Or, for maximum quality (requires more memory)
LLM_MODEL=mistralai/Mistral-7B-Instruct-v0.2
```
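On the Python side, honoring `LLM_MODEL` is just an environment lookup with a fallback to the documented default; the function name below is illustrative (see `llm_backend.py` for the real logic).

```python
import os

# The default matches the README's documented default model.
DEFAULT_MODEL = "microsoft/phi-2"

def resolve_model_name() -> str:
    """Return the model set via LLM_MODEL, or the default if unset/blank."""
    return os.environ.get("LLM_MODEL", "").strip() or DEFAULT_MODEL
```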

### Why Local Transformers?

- ✅ **No API dependencies**: runs entirely on your Space
- ✅ **No 404 or network errors**: nothing depends on a remote endpoint
- ✅ **Fast after loading**: models stay cached in memory
- ✅ **Instruction-tuned**: the recommended models are designed to follow prompts
- ✅ **Privacy**: all processing happens locally

### Tips for Best Performance with Local Models

1. **Use Phi-2 (default)**: best balance of quality and resource usage
2. **First load takes time**: the model downloads and loads (~2-3 minutes for Phi-2)
3. **Subsequent requests are fast**: the model stays in memory (5-10 seconds per request)
4. **For maximum quality**: use Mistral-7B-Instruct (requires 8GB+ RAM)
5. **For faster loading**: use Gemma-2B-IT or TinyLlama (good quality, smaller)
6. **Avoid Flan-T5 models**: they copy examples instead of generating contextual questions
7. **Be specific in outlines**: more detail helps the model generate better questions

## 📦 Installation

```bash
# Install dependencies
pip install -r requirements.txt

# Check environment setup (optional but recommended)
python check_env.py

# Run the app
python app.py
```

πŸ—οΈ Architecture

ConversAI is built with a modular architecture:

- `llm_backend.py` - unified LLM interface supporting multiple providers
- `survey_generator.py` - AI-powered survey generation
- `survey_translator.py` - multi-language translation engine
- `data_analyzer.py` - qualitative data analysis and insights
- `conversation_flow.py` - conversation flow design and management
- `conversation_session.py` - live conversation session tracking
- `conversation_moderator.py` - AI-powered interview moderator
- `app.py` - Gradio-based web interface
- `export_utils.py` - export to JSON, CSV, Markdown
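The "unified LLM interface" in `llm_backend.py` might look like the sketch below: an abstract `generate` method, with the transformers pipeline loaded lazily so app startup stays fast. Class and method names are assumptions, not the module's actual API.

```python
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Illustrative unified interface that every provider implements."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class LocalTransformersBackend(LLMBackend):
    """Local-transformers provider matching the README's default setup."""

    def __init__(self, model_name: str = "microsoft/phi-2"):
        self.model_name = model_name
        self._pipe = None  # model is loaded lazily on first generate()

    def generate(self, prompt: str) -> str:
        if self._pipe is None:
            # Deferred import and download: the multi-GB model is only
            # fetched when generation is first requested.
            from transformers import pipeline
            self._pipe = pipeline("text-generation", model=self.model_name)
        return self._pipe(prompt, max_new_tokens=256)[0]["generated_text"]
```

Because every backend exposes the same `generate` signature, the survey generator, translator, and moderator can share one provider object without caring which model sits behind it.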

## 📄 Data Privacy

- All processing is done through your configured LLM provider (locally, with the default backend)
- No data is stored permanently by this application
- Survey data and responses remain in your control
- Suitable for sensitive research projects

## 🤝 Contributing

Contributions are welcome! This is a production-grade application designed for real-world qualitative research.

πŸ“ License

MIT License - Feel free to use for research and commercial purposes.


## 📚 Documentation

**New to ConversAI?** Start with `USER_GUIDE.md` for a complete walkthrough.


**Diagnostic tools:**

- Run `python check_env.py` to check your environment setup
- Run `python test_hf_backend.py` to test the HuggingFace connection

*Built with ❤️ using Gradio and state-of-the-art open-source LLMs*