---
title: ResearchAI
emoji: 🏛️
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
---
# 🏛️ Multi-Model Hierarchical Research System

A sophisticated hierarchical multi-agent research system with real-time progress tracking and a live dashboard. Powered by multiple AI models (Qwen, Llama, Mistral) for comprehensive market research, competitive analysis, and strategic insights.
## ✨ Features

### 🎯 Hierarchical Multi-Agent Architecture

```
Supervisor (Strategy)
        │
        ├── Researcher Agent 🔍 (Industry Leaders)
        ├── Analyzer Agent 📊 (Best Practices)
        └── Critic Agent 🔎 (Quality Review)
        │
Synthesizer Agent 💡 (Recommendations)
```
### 📊 Real-Time Progress Tracking

- **Live Dashboard** - Watch research progress in real time
- **Phase-by-phase updates** - See each agent's status
- **Execution metrics** - Track timing and performance
- **Error handling** - Graceful degradation with retry logic
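The live-dashboard pattern above can be sketched as a Python generator that yields a growing log after each phase completes — a minimal illustration with hypothetical phase names and timing, not the app's actual implementation:

```python
import time

# Hypothetical phase list mirroring the agents described above.
PHASES = ["Researcher", "Analyzer", "Critic", "Synthesizer"]

def run_with_live_dashboard(topic):
    """Yield the full dashboard text after each phase completes."""
    log = [f"Research started! Topic: {topic}"]
    for phase in PHASES:
        start = time.time()
        # ... real work (model call + web search) would happen here ...
        elapsed = time.time() - start
        log.append(f"{phase}: complete ({elapsed:.1f}s)")
        yield "\n".join(log)

# Each yielded string is the dashboard so far, so a UI can re-render it live.
for dashboard in run_with_live_dashboard("AI project management tools"):
    print(dashboard.splitlines()[-1])
```

Gradio renders streaming output exactly this way: each `yield` replaces the displayed text, which is what makes the phase-by-phase updates possible.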
### 🤖 Multi-Model Support

- **Qwen 2.5 7B** - Fast, efficient analysis
- **Qwen 2.5 72B** - Most capable Qwen model
- **Meta Llama 3.1 70B** - Strong reasoning capabilities
- **Mistral Large** - Excellent analysis and synthesis
### 🔍 Comprehensive Research

- **Industry Leaders** - Top 5 companies setting standards
- **Best Practices** - Proven methods and innovations
- **Quality Review** - Independent assessment and validation
- **Strategic Recommendations** - Actionable roadmap

### 📝 Rich Output

- Executive summaries with infographics
- Execution timelines and performance metrics
- Model assignment verification
- Search history and metadata
## 🚀 Quick Start

### 1. Get a HuggingFace API Token

Visit [HuggingFace Settings](https://huggingface.co/settings/tokens):

- Click "New token"
- Select "Read" permission
- Copy the token (starts with `hf_...`)
### 2. Set Environment Variable

```bash
# On Linux/Mac
export HF_TOKEN=hf_your_token_here

# On Windows (PowerShell)
$env:HF_TOKEN="hf_your_token_here"

# Or create a .env file
echo "HF_TOKEN=hf_your_token_here" > .env
```
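However you set it, the app reads the token from the environment at startup. A minimal sketch of that lookup (hypothetical helper name, not the app's exact code):

```python
import os

def get_hf_token():
    """Return the HF token from the environment, or raise a clear error."""
    token = os.environ.get("HF_TOKEN", "").strip()
    if not token:
        raise RuntimeError("HF_TOKEN not found - see the Quick Start section.")
    if not token.startswith("hf_"):
        # HuggingFace user tokens conventionally start with "hf_"
        print("Warning: HF_TOKEN does not start with 'hf_'; is it a valid token?")
    return token
```

If you use a `.env` file, a loader such as `python-dotenv` must populate the environment before this lookup runs.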
### 3. Install Dependencies

```bash
pip install -r requirements.txt
```

### 4. Run the Application

```bash
python app.py
```

The application will start on `http://localhost:7860`.
## 📖 Usage Guide

### Basic Research

1. **Enter a research topic**
   - Example: "AI project management tools"
   - Example: "Sustainable fashion brands"
   - Example: "Electric vehicle charging infrastructure"
2. **Click "Start Research"**
   - Watch the Live Dashboard tab for real-time progress
   - Each agent will execute in sequence
3. **Review the results**
   - **Summary**: Executive overview and metadata
   - **Industry Leaders**: Top 5 companies/products
   - **Best Practices**: Proven strategies and innovations
   - **Quality Review**: Independent assessment
   - **Recommendations**: Strategic action plan
### Advanced: Configure Models

1. Open the "Configure AI Models" accordion
2. Select a model for each phase:
   - Query Understanding
   - Industry Leaders Research
   - Best Practices Analysis
   - Quality Review
   - Recommendations Generation
3. Click "Start Research" with your custom configuration
## 📊 Understanding the Output

### Live Dashboard

Shows real-time progress as research happens:

```
🚀 Research started!
📋 Topic: AI project management tools
🤖 Models configured: 4 unique models

🔍 PHASE 1: RESEARCHER AGENT - Industry Leaders
   Model: Qwen/Qwen2.5-72B-Instruct
   Status: ⏳ Running...
   Status: ✅ Complete (24.5s)

📊 PHASE 2: ANALYZER AGENT - Best Practices
   Model: Qwen/Qwen2.5-72B-Instruct
   Status: ⏳ Running...
   Status: ✅ Complete (25.2s)

[... more phases ...]

🎉 RESEARCH COMPLETE!

📊 EXECUTION SUMMARY:
🔍 Researcher:   24.5s [████████████████████████████]
📊 Analyzer:     25.2s [████████████████████████████]
🔎 Critic:       14.8s [████████████████████████████]
💡 Synthesizer:  19.5s [████████████████████████████]
──────────────────────────────────────────────────────────────
⏱️ TOTAL TIME:   84.0s [████████████████████████████]
```

### Summary Tab
- Research overview with hierarchy diagram
- Agent execution status and timing
- Performance metrics
- Model assignment verification
- Research metadata

### Industry Leaders Tab
- Top 5 companies/products
- Market positioning
- Key strengths
- Notable features
- Market metrics

### Best Practices Tab
- Industry standards and frameworks
- Success stories and case studies
- Innovation patterns
- Implementation guidelines
- Key takeaways

### Quality Review Tab
- Research completeness assessment
- Source quality evaluation
- Recency and relevance check
- Clarity and usefulness rating
- Improvement recommendations
- Overall quality scores

### Recommendations Tab
- Executive summary
- Immediate actions (0-30 days)
- Short-term strategy (1-3 months)
- Long-term vision (3-12 months)
- Success metrics
- Risk mitigation strategies
- Resource requirements
- Next steps
## 🏗️ Architecture

### Research Engine

- **MultiModelResearchEngine**: Orchestrates agent execution
- **Model Caching**: Efficient model instance management
- **Retry Logic**: Automatic fallback for API errors
- **Web Search Integration**: Real-time information gathering
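The retry-with-fallback behaviour can be sketched roughly as follows. Function and model names here are illustrative assumptions; the engine's actual logic may differ:

```python
import time

def call_with_fallback(call_model, models, retries=2, delay=1.0):
    """Try each model in order, retrying transient failures before falling back.

    call_model: callable taking a model id and returning a response.
    models:     model ids in order of preference.
    """
    last_error = None
    for model in models:
        for attempt in range(retries):
            try:
                return model, call_model(model)
            except Exception as exc:  # in practice, catch API-specific errors
                last_error = exc
                time.sleep(delay * (attempt + 1))  # simple linear backoff
    raise RuntimeError(f"All models failed; last error: {last_error}")
```

The point of the design is that a single flaky API call never aborts the whole research run; only when every configured model fails does the phase surface an error.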
### Agent System

**Researcher Agent 🔍**
- Identifies top industry leaders
- Analyzes market positioning
- Gathers competitive intelligence

**Analyzer Agent 📊**
- Researches best practices
- Identifies success patterns
- Documents innovations

**Critic Agent 🔎**
- Quality assurance review
- Source validation
- Gap identification

**Synthesizer Agent 💡**
- Synthesizes all inputs
- Generates recommendations
- Creates an action roadmap
### State Management

- **ResearchState**: Tracks search history, model usage, dashboard updates
- **Live Updates**: Real-time progress tracking
- **Caching**: Results and model instances
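A state object like the one described might look roughly like this. The field names are an assumption based on the bullets above, not the app's actual class definition:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchState:
    """Sketch of per-run state shared across the agents."""
    topic: str
    search_history: list = field(default_factory=list)   # web queries performed
    model_usage: dict = field(default_factory=dict)      # phase -> model id
    dashboard_lines: list = field(default_factory=list)  # live dashboard log

    def log(self, line):
        """Append a line and return the full dashboard text for display."""
        self.dashboard_lines.append(line)
        return "\n".join(self.dashboard_lines)
```

Keeping all mutable run data on one object makes it easy to render the dashboard at any moment and to report model usage in the Summary tab afterwards.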
## 🔧 Configuration

### Environment Variables

```bash
# Required
HF_TOKEN=hf_your_token_here

# Optional (for future extensions)
ANTHROPIC_API_KEY=your_anthropic_key
OPENAI_API_KEY=your_openai_key
```
### Model Selection

Edit `DEFAULT_PHASE_MODELS` in `app.py`:

```python
DEFAULT_PHASE_MODELS = {
    "query_understanding": "qwen-2.5-7b",
    "industry_leaders": "qwen-2.5-72b",
    "best_practices": "qwen-2.5-72b",
    "quality_review": "qwen-2.5-72b",
    "recommendations": "qwen-2.5-72b",
}
```
### Available Models

| Model | Provider | Speed | Quality | Cost |
|---|---|---|---|---|
| Qwen 2.5 7B | HuggingFace | ⚡⚡⚡ | ⭐⭐⭐ | 💰 |
| Qwen 2.5 72B | HuggingFace | ⚡⚡ | ⭐⭐⭐⭐ | 💰💰 |
| Llama 3.1 70B | HuggingFace | ⚡⚡ | ⭐⭐⭐⭐ | 💰💰 |
| Mistral Large | HuggingFace | ⚡⚡ | ⭐⭐⭐⭐ | 💰💰 |
## 📈 Expected Performance

### Typical Execution Times

| Phase | Duration | Notes |
|---|---|---|
| Researcher Agent | 20-30s | Includes web search |
| Analyzer Agent | 20-30s | Includes web search |
| Critic Agent | 10-20s | No web search |
| Synthesizer Agent | 15-25s | No web search |
| **Total** | **80-120s** | ~2 minutes |

### Factors Affecting Speed
- Model size (larger = slower)
- Topic complexity
- Internet speed (affects web search)
- API response time
- System load
## 🐛 Troubleshooting

### "HF_TOKEN not found"

**Solution:** Set the environment variable:

```bash
export HF_TOKEN=hf_your_token_here
```

### "API compatibility issue"

**Solution:** The system automatically falls back to compatible configurations. If issues persist:

- Try using Qwen models instead
- Simplify your research topic
- Check your internet connection

### "Research stuck on Running"

**Solution:**

- Check your internet connection
- Verify that HF_TOKEN is valid
- Try a simpler topic
- Check HuggingFace API status

### "Empty results"

**Solution:**

- Check the Live Dashboard for errors
- Verify all models are available
- Try the default model configuration
- Simplify the research topic
## 📦 Deployment

### Local Deployment

```bash
python app.py
```

### HuggingFace Spaces

1. Create a new Space on HuggingFace
2. Upload `app.py` and `requirements.txt`
3. Set `HF_TOKEN` as a Space secret (don't upload a `.env` file)
4. HuggingFace automatically detects the Gradio app and the Space launches
### Docker Deployment

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
# Make Gradio listen on all interfaces inside the container
ENV GRADIO_SERVER_NAME=0.0.0.0
EXPOSE 7860
# Don't bake HF_TOKEN into the image; pass it at runtime instead
CMD ["python", "app.py"]
```
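With that Dockerfile, build once and supply the token at runtime (the image name `researchai` is arbitrary):

```bash
docker build -t researchai .
docker run -e HF_TOKEN=hf_your_token_here -p 7860:7860 researchai
```

Passing the token with `-e` keeps it out of the image layers and out of version control.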
## 📁 File Structure

```
.
├── app.py              # Main application
├── requirements.txt    # Python dependencies
├── agents_config.yaml  # Agent configuration (optional)
├── .env                # Environment variables (local only)
└── README.md           # This file
```
## 🔒 Security

### API Key Management

- Never commit the `.env` file to version control
- Use HuggingFace Spaces secrets for deployment
- Rotate tokens regularly
- Use read-only tokens when possible

### Data Privacy

- Research results are not stored
- Web searches are performed by the models
- No data is sent to external services except the HuggingFace API
- Each session is independent
## 📚 API Reference

### Main Function: `run_research()`

```python
run_research(
    topic: str,
    model_query: str,
    model_leaders: str,
    model_practices: str,
    model_quality: str,
    model_recommendations: str,
    progress: gr.Progress
) -> Tuple[str, str, str, str, str, str]
```

**Parameters:**

- `topic`: Research topic
- `model_*`: Model selection for each phase
- `progress`: Gradio progress callback

**Returns:** Summary, Leaders, Practices, Review, Recommendations, Dashboard
### Research Engine

```python
engine = MultiModelResearchEngine(phase_models)
engine.research_industry_leaders(topic)
engine.research_best_practices(topic)
engine.quality_review(research_text)
engine.generate_recommendations(topic, research_text)
```
## 🤝 Contributing

Contributions are welcome! Areas for enhancement:

- Additional model support
- Custom agent configurations
- Export formats (PDF, DOCX, etc.)
- Caching and persistence
- Advanced filtering options

## 📄 License

MIT License - see the LICENSE file for details.
## 💬 Support

### Getting Help

- Check the Troubleshooting section
- Review the Live Dashboard for error messages
- Verify your environment setup
- Check HuggingFace API status

### Common Issues

**Q: How long does research take?**
A: Typically 80-120 seconds (about 2 minutes), depending on topic complexity and model selection.

**Q: Can I use different models for each phase?**
A: Yes! Use the "Configure AI Models" accordion to select different models.

**Q: What if a model fails?**
A: The system has automatic retry logic and will gracefully degrade to compatible configurations.

**Q: How many searches are performed?**
A: Typically 8-12 searches across the Researcher and Analyzer agents.

**Q: Can I export the results?**
A: Results are displayed in Markdown format and can be copied. Future versions will support PDF/DOCX export.
## 🗺️ Roadmap

### Upcoming Features

- PDF/DOCX export
- Custom agent configuration via YAML
- Result caching and history
- Advanced filtering options
- Custom prompt templates
- Multi-language support
- API endpoint for programmatic access
- Result persistence and database storage
## 📊 Metrics & Analytics

The system tracks:

- Execution time per agent
- Model usage statistics
- Search queries performed
- Success/failure rates
- Research coverage metrics

All metrics are displayed in the Summary and Dashboard tabs.

---

Made with ❤️ for intelligent research and decision-making.

For questions or suggestions, please open an issue or contact the development team.