
Multi-Agent Platform API Server

A comprehensive FastAPI-based REST API server for managing multi-agent systems with real-time WebSocket support, monitoring, and scalable architecture.

🚀 Features

  • RESTful API: Complete CRUD operations for agents, tasks, resources, and conflicts
  • Real-time WebSocket: Live monitoring and event streaming
  • Authentication & Security: JWT-based authentication and API key support
  • Monitoring & Metrics: Prometheus metrics and Grafana dashboards
  • Scalable Architecture: Docker, Kubernetes, and horizontal scaling support
  • Database Integration: PostgreSQL and Supabase support
  • Performance Testing: Built-in load testing and benchmarking tools

📋 Table of Contents

  1. Quick Start
  2. API Endpoints
  3. WebSocket API
  4. Authentication
  5. Monitoring
  6. Deployment
  7. Development
  8. Troubleshooting

πŸƒβ€β™‚οΈ Quick Start

Prerequisites

  • Python 3.11+
  • Redis 7.0+
  • Docker (optional)

Installation

  1. Clone the repository
git clone <repository-url>
cd AI-Agent
  2. Install dependencies
pip install -r requirements.txt
  3. Set up environment variables
cp .env.example .env
# Edit .env with your configuration
  4. Start Redis
# Using Docker
docker run -d -p 6379:6379 redis:7-alpine

# Or using system package manager
sudo systemctl start redis
  5. Run the API server
# Development mode
uvicorn src.api_server:app --host 0.0.0.0 --port 8000 --reload

# Production mode
uvicorn src.api_server:app --host 0.0.0.0 --port 8000 --workers 4
  6. Test the API
# Health check
curl http://localhost:8000/api/v1/health

# API documentation
open http://localhost:8000/docs

🔌 API Endpoints

Authentication

All API endpoints require authentication using Bearer tokens:

curl -H "Authorization: Bearer your-token-here" \
     http://localhost:8000/api/v1/agents
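
For scripted access, the same authenticated call can be sketched in Python using only the standard library (the base URL and token below are placeholders, not values shipped with the project):

```python
import urllib.request

API_BASE = "http://localhost:8000/api/v1"  # assumed local deployment

def authed_request(path: str, token: str, method: str = "GET") -> urllib.request.Request:
    """Build a request carrying the Bearer token the API expects."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        method=method,
        headers={"Authorization": f"Bearer {token}"},
    )

req = authed_request("/agents", "your-token-here")
# urllib.request.urlopen(req) would perform the call against a running server.
```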

Agent Management

Register Agent

curl -X POST http://localhost:8000/api/v1/agents \
  -H "Authorization: Bearer your-token" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "data_processor_agent",
    "capabilities": ["data_processing", "ml_inference"],
    "resources": {"cpu": 2, "memory": "1Gi"},
    "metadata": {"version": "1.0", "team": "ai"}
  }'

List Agents

curl -H "Authorization: Bearer your-token" \
     http://localhost:8000/api/v1/agents

# With filters
curl -H "Authorization: Bearer your-token" \
     "http://localhost:8000/api/v1/agents?status_filter=active&capability_filter=data_processing"

Get Agent Details

curl -H "Authorization: Bearer your-token" \
     http://localhost:8000/api/v1/agents/{agent_id}

Deregister Agent

curl -X DELETE -H "Authorization: Bearer your-token" \
     http://localhost:8000/api/v1/agents/{agent_id}

Task Management

Submit Task

curl -X POST http://localhost:8000/api/v1/tasks \
  -H "Authorization: Bearer your-token" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Process Customer Data",
    "description": "Analyze customer behavior patterns",
    "priority": 8,
    "required_capabilities": ["data_processing"],
    "resources": {"cpu": 1, "memory": "512Mi"},
    "deadline": "2024-01-15T23:59:59Z",
    "metadata": {"customer_id": "12345"}
  }'
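
A client can validate these fields before submitting to avoid a server-side rejection. The sketch below is illustrative: `build_task_payload` is not part of the API, and the 1–10 priority range is an assumption based on the example value of 8:

```python
import json
from datetime import datetime

def build_task_payload(title: str, priority: int, deadline: str, **extra) -> str:
    """Assemble a task submission body with basic client-side validation."""
    if not 1 <= priority <= 10:  # assumed range, matching the example value 8
        raise ValueError("priority must be between 1 and 10")
    # Reject malformed deadlines early; the API example uses ISO 8601 with a Z suffix.
    datetime.fromisoformat(deadline.replace("Z", "+00:00"))
    return json.dumps({"title": title, "priority": priority, "deadline": deadline, **extra})

payload = build_task_payload(
    "Process Customer Data", 8, "2024-01-15T23:59:59Z",
    required_capabilities=["data_processing"],
)
```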

List Tasks

curl -H "Authorization: Bearer your-token" \
     http://localhost:8000/api/v1/tasks

# With filters
curl -H "Authorization: Bearer your-token" \
     "http://localhost:8000/api/v1/tasks?status_filter=pending&priority_filter=8"

Get Task Details

curl -H "Authorization: Bearer your-token" \
     http://localhost:8000/api/v1/tasks/{task_id}

Resource Management

Register Resource

curl -X POST http://localhost:8000/api/v1/resources \
  -H "Authorization: Bearer your-token" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "gpu_cluster_01",
    "type": "gpu",
    "capacity": {"gpus": 4, "memory": "32Gi"},
    "metadata": {"model": "RTX 4090", "location": "rack-1"}
  }'

List Resources

curl -H "Authorization: Bearer your-token" \
     http://localhost:8000/api/v1/resources

# Filter by type
curl -H "Authorization: Bearer your-token" \
     "http://localhost:8000/api/v1/resources?type_filter=gpu"

Conflict Management

Report Conflict

curl -X POST http://localhost:8000/api/v1/conflicts \
  -H "Authorization: Bearer your-token" \
  -H "Content-Type: application/json" \
  -d '{
    "agent_id": "agent-123",
    "task_id": "task-456",
    "conflict_type": "resource_contention",
    "description": "Multiple agents requesting same GPU resource",
    "severity": "high",
    "metadata": {"resource_id": "gpu-001"}
  }'

List Conflicts

curl -H "Authorization: Bearer your-token" \
     http://localhost:8000/api/v1/conflicts

# Filter by severity
curl -H "Authorization: Bearer your-token" \
     "http://localhost:8000/api/v1/conflicts?severity_filter=high"

Workflow Management

Create Workflow

curl -X POST http://localhost:8000/api/v1/workflows \
  -H "Authorization: Bearer your-token" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Data Processing Pipeline",
    "description": "End-to-end data processing workflow",
    "steps": [
      {
        "name": "data_ingestion",
        "type": "task",
        "parameters": {"source": "database"}
      },
      {
        "name": "data_processing",
        "type": "task",
        "parameters": {"algorithm": "random_forest"}
      },
      {
        "name": "result_export",
        "type": "task",
        "parameters": {"format": "json"}
      }
    ],
    "metadata": {"version": "1.0"}
  }'

Execute Workflow

curl -X POST http://localhost:8000/api/v1/workflows/{workflow_id}/execute \
  -H "Authorization: Bearer your-token" \
  -H "Content-Type: application/json" \
  -d '{
    "parameters": {"input_file": "data.csv"},
    "metadata": {"execution_id": "exec-123"}
  }'

Performance Monitoring

Get Performance Metrics

curl -H "Authorization: Bearer your-token" \
     http://localhost:8000/api/v1/metrics/performance

# Filter by agent
curl -H "Authorization: Bearer your-token" \
     "http://localhost:8000/api/v1/metrics/performance?agent_id=agent-123"

Dashboard

Get Dashboard Summary

curl -H "Authorization: Bearer your-token" \
     http://localhost:8000/api/v1/dashboard/summary

Get Dashboard Activity

curl -H "Authorization: Bearer your-token" \
     "http://localhost:8000/api/v1/dashboard/activity?limit=50"

🔌 WebSocket API

Connection

Connect to the WebSocket endpoint:

const ws = new WebSocket('ws://localhost:8000/api/v1/ws/client-123');

Message Format

All WebSocket messages use JSON format:

{
  "type": "message_type",
  "data": {},
  "timestamp": "2024-01-01T12:00:00Z"
}
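
As an illustration, an envelope in this shape can be produced with a small helper (`make_message` is a hypothetical name, not part of the server code):

```python
import json
from datetime import datetime, timezone

def make_message(msg_type: str, data: dict) -> str:
    """Serialize an event into the JSON envelope shown above."""
    envelope = {
        "type": msg_type,
        "data": data,
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
    return json.dumps(envelope)

msg = json.loads(make_message("task_activity", {"task_id": "task-456"}))
```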

Subscription

Subscribe to specific event types:

ws.send(JSON.stringify({
  "type": "subscribe",
  "subscriptions": ["agent_activity", "task_activity", "conflict_alerts"]
}));

Event Types

  • agent_activity: Agent registration, status changes, heartbeats
  • task_activity: Task submission, assignment, completion
  • conflict_alerts: Conflict reports and resolutions
  • system_metrics: Real-time system performance metrics

Example WebSocket Client

import asyncio
import aiohttp
import json

async def websocket_client():
    async with aiohttp.ClientSession() as session:
        async with session.ws_connect('ws://localhost:8000/api/v1/ws/test-client') as ws:
            # Subscribe to events
            await ws.send_json({
                "type": "subscribe",
                "subscriptions": ["agent_activity", "task_activity"]
            })
            
            # Listen for messages
            async for msg in ws:
                if msg.type == aiohttp.WSMsgType.TEXT:
                    data = json.loads(msg.data)
                    print(f"Received: {data}")
                elif msg.type == aiohttp.WSMsgType.ERROR:
                    print(f"WebSocket error: {ws.exception()}")

asyncio.run(websocket_client())

πŸ” Authentication

JWT Authentication

The API supports JWT-based authentication:

# Requires the PyJWT package (pip install PyJWT)
import jwt
from datetime import datetime, timedelta, timezone

# Create token
payload = {
    "sub": "user123",
    "exp": datetime.now(timezone.utc) + timedelta(hours=24)
}
token = jwt.encode(payload, "your-secret-key", algorithm="HS256")

# Use token
headers = {"Authorization": f"Bearer {token}"}
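
Under HS256, a token is just two base64url-encoded JSON segments plus an HMAC-SHA256 signature over them. The sketch below builds one with only the standard library to show that structure; in practice PyJWT's `jwt.encode`/`jwt.decode` should be used, since they also handle claims validation:

```python
import base64
import hashlib
import hmac
import json

def b64url(raw: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: str) -> str:
    """Build an HS256 JWT by hand to show what jwt.encode() produces."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

token = sign_hs256({"sub": "user123"}, "your-secret-key")
```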

API Key Authentication

For simpler integrations, use API key authentication:

curl -H "X-API-Key: your-api-key" \
     http://localhost:8000/api/v1/agents

📊 Monitoring

Prometheus Metrics

The API server exposes metrics at /metrics:

curl http://localhost:8000/metrics

Key metrics:

  • http_requests_total: Total HTTP requests
  • http_request_duration_seconds: Request duration histogram
  • agent_platform_active_agents_total: Number of active agents
  • agent_platform_active_tasks_total: Number of active tasks
  • websocket_connections_total: Number of WebSocket connections
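
The `/metrics` output uses the line-oriented Prometheus text exposition format, which is easy to inspect directly. A minimal stdlib parser sketch (it skips `# HELP`/`# TYPE` comments, ignores labels, and assumes no spaces inside label values):

```python
def parse_metrics(text: str) -> dict:
    """Minimal parse of Prometheus text format into {metric_name: value}."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comments and blank lines
        name_part, _, value = line.rpartition(" ")
        name = name_part.split("{", 1)[0]  # drop any {label="..."} block
        values[name] = float(value)  # later samples of the same name overwrite
    return values

sample = """\
# HELP http_requests_total Total HTTP requests
# TYPE http_requests_total counter
http_requests_total{method="GET"} 1027
agent_platform_active_agents_total 5
"""
metrics = parse_metrics(sample)
```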

Grafana Dashboard

  1. Access Grafana at http://localhost:3000
  2. Login with admin/admin
  3. Import dashboard from monitoring/grafana/dashboards/agent-platform-dashboard.json

Health Check

curl http://localhost:8000/api/v1/health

Response:

{
  "status": "healthy",
  "components": {
    "platform": "healthy",
    "redis": "healthy",
    "database": "healthy"
  },
  "metrics": {
    "active_agents": 5,
    "active_tasks": 12,
    "active_connections": 3
  },
  "timestamp": "2024-01-01T12:00:00Z"
}
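
A readiness probe can key off this response shape; the helper below is an illustrative sketch, not part of the server:

```python
def is_healthy(health: dict) -> bool:
    """True when the overall status and every component report healthy."""
    return (
        health.get("status") == "healthy"
        and all(v == "healthy" for v in health.get("components", {}).values())
    )

# Sample response shape taken from the health check example above.
sample = {
    "status": "healthy",
    "components": {"platform": "healthy", "redis": "healthy", "database": "healthy"},
}
```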

🚀 Deployment

Docker Deployment

# Build image
docker build -t agent-platform:latest .

# Run with Docker Compose
docker-compose up -d

Kubernetes Deployment

# Create namespace
kubectl create namespace agent-platform

# Apply manifests
kubectl apply -f k8s/

# Check status
kubectl get pods -n agent-platform

Production Configuration

See DEPLOYMENT_GUIDE.md for detailed production setup instructions.

πŸ› οΈ Development

Project Structure

src/
├── api_server.py          # Main FastAPI application
├── unified_architecture/  # Core platform components
├── infrastructure/        # Database, monitoring, etc.
└── tools/                 # Utility tools and helpers

monitoring/
├── prometheus.yml         # Prometheus configuration
└── grafana/               # Grafana dashboards and datasources

k8s/                       # Kubernetes manifests
scripts/                   # Utility scripts

Running Tests

# Unit tests
pytest tests/

# Performance tests
python scripts/performance_test.py --agents 100 --tasks 200

# Load testing
python scripts/performance_test.py \
  --agents 500 \
  --tasks 1000 \
  --websocket-connections 50

Code Quality

# Format code
black src/ tests/

# Lint code
flake8 src/ tests/

# Type checking
mypy src/

Adding New Endpoints

  1. Define Pydantic models in api_server.py
  2. Add route handlers with proper authentication
  3. Update WebSocket broadcasts for real-time updates
  4. Add metrics for monitoring
  5. Write tests for the new functionality

Example:

@router.post("/custom-endpoint", response_model=CustomResponse)
async def custom_endpoint(
    request: CustomRequest,
    platform: MultiAgentPlatform = Depends(get_platform),
    token: str = Depends(verify_token)
):
    # Implementation
    result = platform.custom_operation(request)
    
    # Broadcast update
    await manager.broadcast(
        WebSocketMessage(
            type="custom_event",
            data={"result": result}
        ).json()
    )
    
    return CustomResponse(**result)

🔧 Troubleshooting

Common Issues

API Server Won't Start

# Check logs
docker-compose logs api-server

# Verify dependencies
pip list | grep fastapi

# Check environment variables
echo $DATABASE_URL

Redis Connection Issues

# Test Redis connection
redis-cli ping

# Check Redis logs
docker-compose logs redis

WebSocket Connection Issues

# Test WebSocket connection
wscat -c ws://localhost:8000/api/v1/ws/test

# Check WebSocket logs
docker-compose logs api-server | grep websocket

Performance Issues

High Response Times

# Check resource usage
docker stats

# Monitor slow queries
docker-compose exec postgres psql -c "SELECT * FROM pg_stat_activity WHERE state = 'active';"

# Check API metrics
curl http://localhost:8000/metrics | grep http_request_duration

Memory Issues

# Check memory usage
free -h
docker stats --no-stream

# Analyze memory leaks
python scripts/memory_profiler.py

Debug Mode

Enable debug mode for detailed logging:

export LOG_LEVEL=DEBUG
export DEBUG=true
uvicorn src.api_server:app --reload --log-level debug

📚 Additional Resources

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🆘 Support

For support and questions:

  1. Check the documentation
  2. Review the troubleshooting guide
  3. Open an issue on GitHub
  4. Contact the development team

Happy coding! πŸš€