---
title: Visualisable AI Backend
emoji: 🧠
colorFrom: blue
colorTo: green
sdk: docker
pinned: false
short_description: LLM code generation with real-time trace visualization
---
# Visualisable.ai Backend Service

This is the backend service for Visualisable.ai, providing:
- Real-time model inference with trace extraction
- WebSocket streaming for live visualization
- REST API for model information and generation
## API Endpoints

- `GET /` - Health check
- `GET /health` - Detailed health status
- `GET /model/info` - Model architecture details
- `POST /generate` - Generate text with traces
- `WebSocket /ws` - Real-time trace streaming
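As a sketch of how a client might call the generation endpoint: the paths above are from this README, but the request body field names (`prompt`, `max_new_tokens`) and the Space URL are assumptions, so check the running service's schema before relying on them.

```python
import json

BASE_URL = "https://your-space.hf.space"  # placeholder: replace with your Space URL

def build_generate_request(prompt: str, max_new_tokens: int = 64) -> dict:
    """Build a request spec for POST /generate.

    The body field names here are assumptions, not confirmed by the
    service; consult its actual schema for the real contract.
    """
    return {
        "method": "POST",
        "url": f"{BASE_URL}/generate",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"prompt": prompt, "max_new_tokens": max_new_tokens}),
    }

req = build_generate_request("def add(a, b):")
print(req["url"])  # → https://your-space.hf.space/generate
```

Sending the spec with your HTTP client of choice (e.g. `requests.post(req["url"], headers=req["headers"], data=req["body"])`) returns the generated text along with its traces.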
## Configuration

Set the following secrets in your Space settings:

- `API_KEY` (optional) - API key for authentication
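On the client side, the secret would typically be attached to each request as a header. A minimal sketch, assuming the header name is `X-API-Key` (the README does not specify the scheme; the service may instead expect an `Authorization` bearer token):

```python
import os

def auth_headers() -> dict:
    """Return auth headers for the backend, if API_KEY is set.

    The header name "X-API-Key" is an assumption; check how the
    service actually validates the key.
    """
    key = os.environ.get("API_KEY")
    return {"X-API-Key": key} if key else {}

print(auth_headers())  # {} when API_KEY is unset
```

Because `API_KEY` is optional, the helper degrades gracefully to no auth header when the secret is absent.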
## Frontend

The frontend is deployed separately on Vercel. Connect it by setting the backend URL in your frontend environment variables.
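For example, the frontend's environment configuration on Vercel might look like the following (both the variable name and the URL are placeholders, not defined by this service):

```shell
# .env.local — variable name and URL are placeholders; use whatever
# name your frontend code reads the backend URL from
NEXT_PUBLIC_BACKEND_URL=https://your-space.hf.space
```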