---
title: DEX Evolution Outpost
emoji: ⚡
colorFrom: blue
colorTo: indigo
sdk: gradio
app_file: app.py
pinned: false
license: mit
---

# ⚡ DEX Evolution Outpost

Multi-agent orchestration dashboard running on HuggingFace Spaces with ZeroGPU (NVIDIA H200).

## Features

- **⚡ Command Center** — LLM-powered router that dispatches tasks to specialized agents
- **💻 Code Agent** — Stateful Python execution on H200 GPU (pandas, numpy, matplotlib, torch)
- **🔬 Research Agent** — Deep multi-source analysis with parallel web searches
- **🎨 Image Agent** — Generate images using Stable Diffusion on ZeroGPU
- **🌐 Web Agent** — Quick web search and page reading
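
The "stateful" part of the Code Agent means variables survive across snippets, like a tiny REPL. A minimal sketch of that idea (the class name and structure here are illustrative, not the Space's actual implementation):

```python
# Illustrative sketch: each run() call shares one namespace dict,
# so variables defined in earlier snippets persist into later ones.
import contextlib
import io


class StatefulExecutor:
    """Runs Python snippets in a persistent namespace, REPL-style."""

    def __init__(self):
        self.namespace = {}

    def run(self, code: str) -> str:
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, self.namespace)  # state persists across calls
        return buf.getvalue()


ex = StatefulExecutor()
ex.run("x = 21")                 # first snippet defines x
print(ex.run("print(x * 2)"))    # second snippet still sees it → 42
```

In the Space, the equivalent execution happens inside a `@spaces.GPU`-decorated function so the snippet can use torch on the H200.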

## Setup

1. Duplicate or clone this Space
2. Set your API keys in Settings or Space Secrets:
   - **API Key**: OpenRouter / HF Inference / any OpenAI-compatible endpoint
   - **Serper Key**: For web search ([serper.dev](https://serper.dev))
   - **HF Token**: For image generation models
3. Configure your model (default: `meta-llama/llama-3.1-70b-instruct` via OpenRouter)
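
Any OpenAI-compatible endpoint works because all agents speak the standard chat-completions protocol. A sketch of the request the Space ends up sending, using the default endpoint and model from the table below (the `build_request` helper and system prompt are illustrative, not the actual code):

```python
# Sketch of a chat-completions payload for the configured defaults.
import json

ENDPOINT = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "meta-llama/llama-3.1-70b-instruct"


def build_request(user_task: str) -> dict:
    """Build a standard OpenAI-style chat-completions payload."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You are the Command Center router."},
            {"role": "user", "content": user_task},
        ],
    }


payload = build_request("Plot a sine wave")
print(json.dumps(payload, indent=2))
# POST this to ENDPOINT with header: Authorization: Bearer <your API key>
```

Swapping providers is just a matter of changing `ENDPOINT` and `MODEL` in Settings; the payload shape stays the same.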

## Configuration

| Setting | Default | Description |
|---------|---------|-------------|
| Endpoint | `https://openrouter.ai/api/v1` | OpenAI-compatible API endpoint |
| Model | `meta-llama/llama-3.1-70b-instruct` | Model to use for all agents |
| Serper Key | — | Google search API key |
| HF Token | — | HuggingFace token for image models |

## Architecture

```
app.py                  # Main Gradio application
backend/
  agent_registry.py     # Agent types, system prompts
  llm.py                # OpenAI-compatible client wrapper
  tools.py              # Web search, URL reading, code execution
  image_gen.py          # Stable Diffusion on ZeroGPU
  research.py           # Parallel research engine
frontend/
  style.css             # Dark cyberpunk theme
```

## How It Works

1. **Command Center** receives your task
2. The LLM decides which agent(s) to use
3. Agents execute with specialized tools:
   - Code Agent runs Python on the H200 via `@spaces.GPU`
   - Research Agent does parallel web searches + page reading
   - Image Agent generates images on the H200 via `@spaces.GPU`
   - Web Agent does quick searches
4. Results stream back to the UI in real time
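
The routing in step 2 can be sketched as follows: the LLM returns a verdict naming an agent, and the Command Center dispatches the task to it. This is a simplified illustration, assuming a JSON verdict format and using a stub in place of the real LLM client; agent names mirror the list above:

```python
# Hedged sketch of LLM-driven dispatch (not the Space's actual code).
import json

# Stub agents standing in for the real specialized agents.
AGENTS = {
    "code": lambda task: f"[code agent on H200] {task}",
    "research": lambda task: f"[research agent] {task}",
    "image": lambda task: f"[image agent on H200] {task}",
    "web": lambda task: f"[web agent] {task}",
}


def dispatch(task: str, ask_llm) -> str:
    """Ask the LLM to pick an agent, then hand it the task."""
    verdict = json.loads(ask_llm(f"Pick an agent for: {task}"))
    agent = AGENTS.get(verdict["agent"], AGENTS["web"])  # fall back to web
    return agent(task)


# Stub LLM for illustration: always routes to the code agent.
print(dispatch("sum 1..100", lambda prompt: '{"agent": "code"}'))
```

In the real app the results of each agent call are streamed back through Gradio rather than returned as a single string.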

## License

MIT