---
title: Visualisable AI Backend
emoji: 🧠
colorFrom: blue
colorTo: green
sdk: docker
pinned: false
short_description: LLM code generation with real-time trace visualization
---
# Visualisable.ai Backend Service

This is the backend service for Visualisable.ai, providing:

- Real-time model inference with trace extraction
- WebSocket streaming for live visualization
- REST API for model information and generation
## API Endpoints

- `GET /` - Health check
- `GET /health` - Detailed health status
- `GET /model/info` - Model architecture details
- `POST /generate` - Generate text with traces
- `WebSocket /ws` - Real-time trace streaming
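As a quick illustration, a minimal Python client for `POST /generate` might look like the sketch below. The JSON field name (`prompt`), the Bearer auth scheme, and the base URL are assumptions, not confirmed by this README; check the service code for the actual request schema.

```python
import json
import urllib.request

# Placeholder base URL; replace with your Space's public URL
BASE_URL = "https://your-space.hf.space"

def build_generate_request(prompt, api_key=None):
    """Build a POST /generate request.

    The `prompt` field and the Bearer Authorization header are
    assumptions about the API schema, not confirmed by this README.
    """
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    if api_key:
        # Only needed if the API_KEY secret is set on the Space
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        f"{BASE_URL}/generate", data=body, headers=headers, method="POST"
    )

# To actually send it:
# response = urllib.request.urlopen(build_generate_request("Hello", api_key="..."))
```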
## Configuration

Set the following secrets in your Space settings:

- `API_KEY` (optional) - API key for authentication
## Frontend

The frontend is deployed separately on Vercel. Connect it by setting the backend URL in your frontend's environment variables.
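On Vercel this is typically a single environment variable pointing at the Space. The variable name below is hypothetical; use whatever key your frontend code actually reads:

```
# Hypothetical variable name - match it to what the frontend reads
BACKEND_URL=https://your-space.hf.space
```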