---
title: "🦀 Multeclaw"
emoji: 🦀
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: "6.13.0"
app_file: app.py
pinned: true
license: mit
short_description: Multi-model AI agent with GPT, Claude, Llama, Groq
tags:
  - agent
  - multi-model
  - chat
  - tools
  - openai
  - anthropic
  - llama
  - mistral
---

# 🦀 Multeclaw – Multi-Model AI Agent System

A complete AI agent framework connecting **GPT**, **Claude**, **Llama**, **Mistral**, **Qwen**, **Groq**, and **Ollama** through a single unified interface with built-in tool execution, multi-step planning, and safety layers.
|
|
| ## ποΈ Architecture |
|
|
| | Component | Role | |
| |-----------|------| |
| | **Router** | Classifies tasks β direct, tool-assisted, multi-step, code, analysis | |
| | **Planner** | Decomposes complex objectives into executable step sequences | |
| | **LLM Client** | Unified streaming interface across 5 providers | |
| | **Executor** | Runs LLM completions and tool calls | |
| | **Memory** | Tracks conversation history, tool results, and session context | |
| | **Tool System** | Sandboxed calculator, code execution, file I/O | |
| | **Safety Layer** | Input content filtering | |
| | **Repair Loop** | Retries failed tool calls with error context | |
|
|
| ## π€ Supported Providers & Models |
|
|
| | Provider | Models | Key Feature | |
| |----------|--------|-------------| |
| | **OpenAI** | GPT-4o, GPT-4o Mini, GPT-4 Turbo | Strong reasoning, tool calling | |
| | **Anthropic** | Claude 4 Opus, Claude 4 Sonnet, Claude 3.5 Haiku | Deep analysis, 200K context | |
| | **HuggingFace** | Llama 3 8B, Qwen 2.5 72B, Mistral 7B | Open models via inference API | |
| | **Groq** | Llama 3 70B, Mixtral 8x7B | Ultra-fast inference | |
| | **Ollama** | Any local model | Privacy, offline use | |
|
|
| ## π Agent Loops |
|
|
| 1. **Reasoning Loop** β LLM processes the request with full conversation context |
| 2. **Tool Loop** β LLM decides to call tools, results are fed back iteratively |
| 3. **Planning Loop** β Complex tasks auto-decomposed into steps, executed sequentially |
| 4. **Repair Loop** β Failed operations retried with error context |
|
|
| ## π οΈ Built-in Tools |
|
|
| - **Calculator** β Safe math evaluation with Python math syntax |
| - **Code Executor** β Sandboxed Python execution with timeout |
| - **File Reader/Writer** β Safe file I/O operations |
| - **Web Search** β (Placeholder for integration) |
|
|
| ## π Quick Start |
|
|
| 1. Go to the **π Settings** tab |
| 2. Enter your API key(s) for the provider(s) you want to use |
| 3. Go to the **π¬ Chat** tab |
| 4. Select a model from the dropdown |
| 5. Start chatting! |
|
|
| ## π Personas |
|
|
| - **Multeclaw Agent** β General-purpose AI agent |
| - **Code Expert** β Senior software engineer |
| - **Research Analyst** β Thorough research analysis |
| - **Creative Writer** β Vivid, engaging writing |
| - **Data Scientist** β Rigorous data analysis |
| - **Custom** β Your own system prompt |
|
|
| ## π Project Structure |
|
|
| ``` |
| multeclaw/ |
| βββ __init__.py # Package exports |
| βββ config.py # Model registry, prompts, tools, agent config |
| βββ llm_client.py # Unified multi-provider LLM client |
| βββ agent.py # Agent core: router, planner, executor, memory, safety |
| βββ ui.py # Gradio UI with 6 tabs |
| app.py # Entry point |
| requirements.txt # Dependencies |
| ``` |
|
|
| ## π‘οΈ Safety |
|
|
| - Input content filtering blocks harmful requests |
| - Code execution sandboxed with 30s timeout |
| - API keys stored in-memory only, never persisted |
| - File operations restricted to safe paths |
| - Math evaluation uses allowlisted functions only |
|
|
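The sandboxed execution with a timeout can be sketched with a subprocess. `run_code` is an assumed name, and multeclaw's executor may add further restrictions beyond what is shown:

```python
# Sketch of sandboxed code execution: run the snippet in a separate,
# isolated Python process and kill it if it exceeds the timeout.
import subprocess
import sys

def run_code(code: str, timeout: float = 30.0) -> str:
    try:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],   # -I: isolated mode, no site/env
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.stdout if proc.returncode == 0 else proc.stderr
    except subprocess.TimeoutExpired:
        return f"Error: execution exceeded {timeout}s timeout"
```

Process isolation contains crashes and infinite loops; stricter sandboxes would also drop filesystem and network access.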