Setup Guide
This guide covers everything you need to set up and run the Fantasy Draft Multi-Agent Demo.
Quick Start
Clone the repository
git clone <your-repo-url>
cd fantasy-draft-agent
Set up Python environment (Python 3.8+ required)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
Set up API key
export OPENAI_API_KEY='your-key-here'
# Or create a .env file with: OPENAI_API_KEY=your-key-here
Run the app
python apps/app.py
Detailed Setup
Environment Setup
The app requires Python 3.8+ and the following main dependencies:
- gradio>=4.0.0 - Web interface
- any-agent - Multi-agent framework
- openai - LLM integration
- python-dotenv - Environment management
- nest-asyncio - Async support for Gradio
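The list above corresponds to a requirements.txt along these lines (only gradio carries a version bound in this guide; treat the unpinned entries as assumptions about the actual file):

```text
gradio>=4.0.0
any-agent
openai
python-dotenv
nest-asyncio
```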
Virtual Environment (Recommended)
Using a virtual environment keeps dependencies isolated:
# Create virtual environment
python -m venv venv
# Activate it
# Linux/Mac:
source venv/bin/activate
# Windows:
venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Deactivate when done
deactivate
Configuration
API Keys: The app requires an OpenAI API key. Set it via:
- Environment variable: export OPENAI_API_KEY='sk-...'
- .env file: create .env in the project root with OPENAI_API_KEY=sk-...
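The key lookup can be sketched as follows (the check_key helper is hypothetical, not part of the app; it only encodes the "starts with sk-" convention mentioned in Troubleshooting below):

```python
import os

def check_key(key: str) -> bool:
    """Return True if the key looks like an OpenAI key (starts with 'sk-')."""
    return key.startswith("sk-")

# Read the key from the environment; empty string if unset
api_key = os.environ.get("OPENAI_API_KEY", "")
```

If python-dotenv is installed, calling load_dotenv() before this lookup makes a .env file in the working directory populate the environment.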
Port Configuration:
- Default web interface: Port 7860
- A2A mode uses dynamic ports (5000-9000 range)
- Modify in apps/app.py if needed
Running Different Modes
Basic Mode (Recommended for Development)
python apps/app.py
- Single process, fast execution
- Good for testing and development
- Supports multiple users
A2A Mode (Distributed Agents)
- Automatically enabled via the UI toggle
- Each agent runs on its own HTTP server
- Dynamic port allocation for multi-user support
- Production-ready architecture
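Dynamic allocation in the 5000-9000 range can be sketched like this (find_free_port is a hypothetical helper; the app's actual implementation may differ):

```python
import socket

def find_free_port(start: int = 5000, end: int = 9000) -> int:
    """Return the first port in [start, end) that we can bind on localhost."""
    for port in range(start, end):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port
            except OSError:
                continue  # port taken, try the next one
    raise RuntimeError(f"no free port in range {start}-{end}")
```

Probing with bind (rather than connect) avoids reporting a port as free when another process already holds it.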
Deployment Options
Local Deployment
- Just run the app as shown above
- Access at http://localhost:7860
- Share link provided by Gradio
Hugging Face Spaces
- Create a new Space on Hugging Face
- Upload all files maintaining structure
- Set the OpenAI API key as a Space secret
- The app will auto-detect and run
Server Deployment
- Ensure Python 3.8+ is installed
- Clone repository to server
- Set up systemd service or process manager
- Configure reverse proxy (nginx/Apache) if needed
- Set environment variables securely
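A minimal systemd unit along these lines would cover the service and environment steps (the unit name, user, and paths are placeholders, not from the project; the EnvironmentFile keeps the API key out of the unit itself):

```ini
[Unit]
Description=Fantasy Draft Multi-Agent Demo
After=network.target

[Service]
User=appuser
WorkingDirectory=/opt/fantasy-draft-agent
EnvironmentFile=/etc/fantasy-draft-agent.env
ExecStart=/opt/fantasy-draft-agent/venv/bin/python apps/app.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```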
Troubleshooting
Common Issues
"OPENAI_API_KEY not found"
- Ensure the API key is set correctly
- Check the .env file location (project root)
- Verify the key starts with sk-
Port already in use
- Change the port in apps/app.py: demo.launch(server_port=7861)
- Or kill the process using the port
Module not found errors
- Ensure virtual environment is activated
- Run pip install -r requirements.txt again
- Check the Python version (3.8+ required)
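A quick runtime guard for the version requirement (a sketch, not code from the app):

```python
import sys

# The app requires Python 3.8+; fail fast with a clear message otherwise
if sys.version_info < (3, 8):
    raise RuntimeError(
        "Python 3.8+ required, found %d.%d" % sys.version_info[:2]
    )
```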
A2A mode fails to start
- Check firewall settings for ports 5000-9000
- Ensure no other services are using these ports
- Try Basic Multiagent mode as fallback
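To check whether something is already listening in the 5000-9000 range, a connect-based probe like this can help (port_in_use is a hypothetical helper, not part of the app):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Try a TCP connect; success means something is listening on the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0
```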
Performance Tips
- Use Basic Multiagent mode for faster response times
- Adjust TYPING_DELAY_SECONDS in constants.py for faster demos
- Run on a machine with a good CPU for multiple A2A agents
- Consider GPU instance if using larger models
Next Steps
- Read TECHNICAL_DOCUMENTATION.md for architecture details
- See FEATURES_AND_ENHANCEMENTS.md for all features
- Check the main README.md for project overview