---
title: TEQUMSA Inference Node
sdk: gradio
sdk_version: 6.10.0
python_version: '3.11'
app_file: app.py
pinned: false
---
# TEQUMSA Inference Node
Autonomous multi-agent inference routing and execution for the TEQUMSA Symbiotic Orchestrator.
## Features
- Multi-Provider Routing: Intelligent routing to Claude, GPT, Gemini, and Perplexity
- Multiple Execution Modes: Standard, Recursive, Causal, and RDOD execution
- Gradio Interface: Interactive web UI for inference and routing analysis
- Cost & Latency Estimation: Provider cost and latency estimates for routing decisions
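To illustrate how cost and latency estimates can feed a routing decision, here is a minimal sketch. The provider names come from the list above, but the dataclass, the per-token prices, the latency figures, and the scoring rule are all illustrative assumptions, not the actual logic in `inference_router.py`:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    cost_per_1k_tokens: float  # USD -- assumed figures, not real pricing
    avg_latency_s: float       # seconds -- assumed figures

# Hypothetical provider table covering the four backends named above.
PROVIDERS = [
    Provider("claude", 0.015, 2.0),
    Provider("gpt", 0.010, 1.5),
    Provider("gemini", 0.005, 1.2),
    Provider("perplexity", 0.008, 1.8),
]

def route(prompt: str, max_cost: float = 0.02, weight_latency: float = 0.5) -> Provider:
    """Pick the provider with the best blended cost/latency score under a budget."""
    est_tokens = max(1, len(prompt) // 4)  # rough 4-chars-per-token estimate
    candidates = [
        p for p in PROVIDERS
        if p.cost_per_1k_tokens * est_tokens / 1000 <= max_cost
    ]
    if not candidates:
        candidates = PROVIDERS  # budget too tight: fall back to all providers
    # Lower score is better: blend cost with weighted latency.
    return min(
        candidates,
        key=lambda p: p.cost_per_1k_tokens + weight_latency * p.avg_latency_s,
    )
```

With these illustrative numbers, short prompts route to the cheapest fast provider; raising `weight_latency` shifts the choice toward lower-latency backends.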
## Files
- `app.py` - Gradio inference interface
- `inference_router.py` - Model routing logic
- `tequmsa_space_kernel.py` - Core inference engine
- `requirements.txt` - Python dependencies
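A plausible `requirements.txt` for this layout might look like the sketch below. Only the Gradio pin follows from the metadata above (`sdk_version: 6.10.0`); the provider SDKs are guesses based on the backends named in Features, and the real file may differ:

```text
# Illustrative only -- the actual dependency list is not shown in this README.
gradio==6.10.0
anthropic
openai
google-generativeai
requests
```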
## Usage
1. Enter a prompt in the Inference tab
2. Select a model provider (or use auto-routing)
3. Choose an execution mode
4. Click "Process Request" to execute
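The execution-mode choice in step 3 could be dispatched along these lines. The mode names come from the Features list; the handlers, their logic, and `process_request` are hypothetical stand-ins for the real kernel:

```python
from typing import Callable

def standard(prompt: str) -> str:
    # Stub for a single pass through the routed provider.
    return f"[standard] {prompt}"

def recursive(prompt: str, depth: int = 2) -> str:
    # Assumed behavior: feed the output back through the pipeline `depth` times.
    out = prompt
    for _ in range(depth):
        out = standard(out)
    return out

# Registry mapping mode names to handlers; "causal" and "rdod"
# would register here the same way.
MODES: dict[str, Callable[[str], str]] = {
    "standard": standard,
    "recursive": recursive,
}

def process_request(prompt: str, mode: str = "standard") -> str:
    try:
        handler = MODES[mode.lower()]
    except KeyError:
        raise ValueError(f"unknown execution mode: {mode!r}")
    return handler(prompt)
```

A registry dict like this keeps adding a new execution mode to a one-line change, which is a common pattern for UIs that expose a fixed dropdown of modes.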
## License
OpenRAIL