---
title: Synapse-Base Inference API
emoji: 🧠
colorFrom: blue
colorTo: purple
sdk: docker
pinned: true
license: cc-by-nc-4.0
---
# 🧠 Synapse-Base Inference API

High-performance chess move prediction API powered by Synapse-Base v3.0.
## 🎯 Features

- **Deep Search Algorithm**: Alpha-beta pruning with move ordering
- **CPU Optimized**: Runs efficiently on 2 vCPU + 16GB RAM
- **REST API**: Simple POST endpoint for move generation
- **Model**: 38.1M-parameter hybrid CNN-Transformer
## 📡 API Endpoint

### `POST /get-move`

**Request:**

```json
{
  "fen": "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
  "depth": 3,
  "time_limit": 5000
}
```

**Response:**

```json
{
  "best_move": "e2e4",
  "evaluation": 0.25,
  "depth_searched": 3,
  "nodes_evaluated": 15234,
  "time_taken": 1247
}
```
## 🔧 Parameters

- `fen` (required): Current board position in FEN notation
- `depth` (optional): Search depth (1-5, default: 3)
- `time_limit` (optional): Maximum search time in milliseconds (default: 5000)
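The endpoint above can be exercised with a small stdlib-only Python client. This is a minimal sketch, assuming the service is reachable at `localhost:7860` (substitute your deployed Space URL); `build_payload` and `get_move` are illustrative helper names, not part of the API itself.

```python
import json
import urllib.request

# Placeholder base URL -- replace with your deployed Space's address.
API_URL = "http://localhost:7860"

def build_payload(fen: str, depth: int = 3, time_limit: int = 5000) -> dict:
    """Assemble the /get-move request body, enforcing the documented bounds."""
    if not 1 <= depth <= 5:
        raise ValueError("depth must be between 1 and 5")
    return {"fen": fen, "depth": depth, "time_limit": time_limit}

def get_move(fen: str, depth: int = 3, time_limit: int = 5000) -> dict:
    """POST a position to /get-move and return the decoded JSON response."""
    body = json.dumps(build_payload(fen, depth, time_limit)).encode()
    req = urllib.request.Request(
        f"{API_URL}/get-move",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Allow the server its full time budget plus a small network margin.
    with urllib.request.urlopen(req, timeout=time_limit / 1000 + 5) as resp:
        return json.load(resp)
```

A call like `get_move("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")` returns a dict with the fields shown in the response example, e.g. `result["best_move"]`.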
## 💻 Local Testing

```shell
docker build -t synapse-inference .
docker run -p 7860:7860 synapse-inference
```
## 📊 Performance

- **Average Response Time**: 1-3 seconds per move
- **Memory Usage**: ~4GB RAM
- **Concurrent Requests**: Up to 4 simultaneous
## ⚠️ Rate Limits

Free tier: 100 requests/hour per IP
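Clients that may brush against the hourly quota can retry with exponential backoff. This is a minimal sketch, assuming the limiter answers throttled requests with HTTP 429 (the status code is not specified above, so adjust for your deployment); `request_with_backoff` and `backoff_delays` are hypothetical helpers, not part of the API.

```python
import time
import urllib.error
import urllib.request

def backoff_delays(max_retries: int = 4) -> list:
    """Seconds to wait before each retry: 1, 2, 4, 8 with the defaults."""
    return [2 ** i for i in range(max_retries)]

def request_with_backoff(req: urllib.request.Request, max_retries: int = 4):
    """Send a request, backing off exponentially when the server throttles.

    Assumes throttled requests get HTTP 429; any other error, or
    exhausting the retry budget, re-raises the original exception.
    """
    delays = backoff_delays(max_retries)
    for attempt in range(max_retries + 1):
        try:
            return urllib.request.urlopen(req, timeout=10)
        except urllib.error.HTTPError as err:
            if err.code != 429 or attempt == max_retries:
                raise
            time.sleep(delays[attempt])
```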
Built with ❤️ by GambitFlow