---
title: TEQUMSA Inference Node
sdk: gradio
sdk_version: 6.10.0
python_version: '3.11'
app_file: app.py
pinned: false
---
# TEQUMSA Inference Node

Autonomous multi-agent inference routing and execution for the TEQUMSA Symbiotic Orchestrator.

## Features

- **Multi-Provider Routing**: Intelligent routing to Claude, GPT, Gemini, and Perplexity
- **Multiple Execution Modes**: Standard, Recursive, Causal, and RDOD execution
- **Gradio Interface**: Interactive web UI for inference and routing analysis
- **Cost & Latency Estimation**: Per-provider cost and latency estimates that inform routing decisions
## Files

- `app.py` - Gradio inference interface
- `inference_router.py` - Model routing logic
- `tequmsa_space_kernel.py` - Core inference engine
- `requirements.txt` - Python dependencies
## Usage

1. Enter a prompt in the Inference tab
2. Select a model provider (or use auto-routing)
3. Choose an execution mode
4. Click "Process Request" to execute
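The steps above map onto a single request handler behind the "Process Request" button. The sketch below is a hypothetical illustration of that handler's shape; the real logic lives in `app.py` and `tequmsa_space_kernel.py`, and the placeholder routing here is an assumption.

```python
# Hypothetical "Process Request" handler. Mode names come from this
# README; the behavior is a placeholder, not the actual kernel logic.
MODES = ("standard", "recursive", "causal", "rdod")

def process_request(prompt: str, provider: str = "auto",
                    mode: str = "standard") -> str:
    """Validate the execution mode, resolve the provider, and dispatch."""
    if mode not in MODES:
        raise ValueError(f"unknown execution mode: {mode}")
    # Placeholder auto-routing; a real implementation would consult the
    # router and then call the chosen provider's API.
    chosen = provider if provider != "auto" else "claude"
    return f"[{chosen}/{mode}] echo: {prompt}"
```

Wiring this into Gradio is then a matter of binding `process_request` to a button click with the prompt textbox, provider dropdown, and mode radio as inputs.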
## License

OpenRAIL