---
title: "πŸ¦… Multeclaw"
emoji: πŸ¦…
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: "6.13.0"
app_file: app.py
pinned: true
license: mit
short_description: Multi-model AI agent with GPT, Claude, Llama, Groq
tags:
- agent
- multi-model
- chat
- tools
- openai
- anthropic
- llama
- mistral
---
# πŸ¦… Multeclaw β€” Multi-Model AI Agent System
A complete AI agent framework that connects **GPT**, **Claude**, **Llama**, **Mistral**, and **Qwen** models across the **OpenAI**, **Anthropic**, **Hugging Face**, **Groq**, and **Ollama** providers through a single unified interface, with built-in tool execution, multi-step planning, and safety layers.
## πŸ—οΈ Architecture
| Component | Role |
|-----------|------|
| **Router** | Classifies tasks β†’ direct, tool-assisted, multi-step, code, analysis |
| **Planner** | Decomposes complex objectives into executable step sequences |
| **LLM Client** | Unified streaming interface across 5 providers |
| **Executor** | Runs LLM completions and tool calls |
| **Memory** | Tracks conversation history, tool results, and session context |
| **Tool System** | Sandboxed calculator, code execution, file I/O |
| **Safety Layer** | Input content filtering |
| **Repair Loop** | Retries failed tool calls with error context |
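As an illustration of the Router's job, task classification can be as simple as keyword heuristics. The category names below come from the table; the `classify_task` name and the keyword patterns are hypothetical, not the project's actual code:

```python
import re

# Hypothetical sketch of the Router's classification step. The five
# category labels mirror the architecture table; the regexes are
# illustrative heuristics only.
def classify_task(prompt: str) -> str:
    p = prompt.lower()
    if re.search(r"\b(calculate|compute|read file|write file|search)\b", p):
        return "tool-assisted"
    if re.search(r"\b(plan|step by step|roadmap)\b", p):
        return "multi-step"
    if re.search(r"\b(code|function|script|debug)\b", p):
        return "code"
    if re.search(r"\b(analy[sz]e|compare|summarize)\b", p):
        return "analysis"
    return "direct"
```

A production router would more likely ask a small, fast model to classify the request, but the contract is the same: one label in, one execution path out.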
## πŸ€– Supported Providers & Models
| Provider | Models | Key Feature |
|----------|--------|-------------|
| **OpenAI** | GPT-4o, GPT-4o Mini, GPT-4 Turbo | Strong reasoning, tool calling |
| **Anthropic** | Claude Opus 4, Claude Sonnet 4, Claude 3.5 Haiku | Deep analysis, 200K context |
| **HuggingFace** | Llama 3 8B, Qwen 2.5 72B, Mistral 7B | Open models via inference API |
| **Groq** | Llama 3 70B, Mixtral 8x7B | Ultra-fast inference |
| **Ollama** | Any local model | Privacy, offline use |
## πŸ”„ Agent Loops
1. **Reasoning Loop** β€” LLM processes the request with full conversation context
2. **Tool Loop** β€” LLM decides to call tools, results are fed back iteratively
3. **Planning Loop** β€” Complex tasks auto-decomposed into steps, executed sequentially
4. **Repair Loop** β€” Failed operations retried with error context
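The Tool Loop (step 2) can be sketched as follows. `fake_llm`, the message dictionaries, and the tool-call shape are hypothetical stand-ins for the real streaming client:

```python
import json

# Minimal sketch of the Tool Loop: the model either returns a tool call
# or a final answer; tool results are appended and the loop repeats.
def tool_loop(messages, llm, tools, max_iters=5):
    for _ in range(max_iters):
        reply = llm(messages)
        if reply.get("tool") is None:
            return reply["content"]  # final answer ends the loop
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "Stopped after max_iters tool calls."

# Stubbed model: asks for the calculator once, then echoes the result.
def fake_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "args": {"expr": "6*7"}}
    return {"tool": None, "content": messages[-1]["content"]}

# Stub tool for the demo; the real calculator is allowlisted, not eval().
tools = {"calculator": lambda expr: eval(expr, {"__builtins__": {}})}
answer = tool_loop([{"role": "user", "content": "what is 6*7?"}], fake_llm, tools)
```

The `max_iters` cap is what keeps a confused model from calling tools forever; the Repair Loop (step 4) re-enters this same loop with the error message appended as a tool result.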
## πŸ› οΈ Built-in Tools
- **Calculator** β€” Safe math evaluation with Python math syntax
- **Code Executor** β€” Sandboxed Python execution with timeout
- **File Reader/Writer** β€” Safe file I/O operations
- **Web Search** — placeholder for future integration
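One common way to implement a safe calculator like the one above is to walk the expression's AST with allowlisted operators and functions rather than calling `eval()` on raw input. This is a hedged sketch, not the project's actual implementation:

```python
import ast
import math
import operator

# Allowlists: only these operators and math functions can ever run.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}
_FUNCS = {"sqrt": math.sqrt, "sin": math.sin, "cos": math.cos, "log": math.log}

def safe_calc(expr: str) -> float:
    """Evaluate a math expression by walking its AST; rejects anything else."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in _FUNCS):
            return _FUNCS[node.func.id](*[ev(a) for a in node.args])
        raise ValueError("disallowed expression")
    return ev(ast.parse(expr, mode="eval").body)
```

Because evaluation never touches `eval()` or builtins, inputs like `__import__('os')` fail with `ValueError` instead of executing.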
## πŸš€ Quick Start
1. Go to the **πŸ”‘ Settings** tab
2. Enter your API key(s) for the provider(s) you want to use
3. Go to the **πŸ’¬ Chat** tab
4. Select a model from the dropdown
5. Start chatting!
## 🎭 Personas
- **Multeclaw Agent** β€” General-purpose AI agent
- **Code Expert** β€” Senior software engineer
- **Research Analyst** β€” Thorough research analysis
- **Creative Writer** β€” Vivid, engaging writing
- **Data Scientist** β€” Rigorous data analysis
- **Custom** β€” Your own system prompt
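In code, a persona registry can be a simple name-to-system-prompt mapping. The prompt strings below are illustrative; only the persona names come from the list above:

```python
# Hypothetical shape of the persona registry; prompt wording is invented.
PERSONAS = {
    "Multeclaw Agent": "You are Multeclaw, a capable general-purpose AI agent.",
    "Code Expert": "You are a senior software engineer; write precise, tested code.",
    "Research Analyst": "You produce thorough, well-sourced research analysis.",
    "Creative Writer": "You write vivid, engaging prose.",
    "Data Scientist": "You apply rigorous statistical reasoning to data.",
}

def system_prompt(persona: str, custom: str = "") -> str:
    """Return the system prompt; 'Custom' uses the user's own text."""
    return custom if persona == "Custom" else PERSONAS[persona]
```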
## πŸ“ Project Structure
```
multeclaw/
β”œβ”€β”€ __init__.py # Package exports
β”œβ”€β”€ config.py # Model registry, prompts, tools, agent config
β”œβ”€β”€ llm_client.py # Unified multi-provider LLM client
β”œβ”€β”€ agent.py # Agent core: router, planner, executor, memory, safety
└── ui.py # Gradio UI with 6 tabs
app.py # Entry point
requirements.txt # Dependencies
```
## πŸ›‘οΈ Safety
- Input content filtering blocks harmful requests
- Code execution sandboxed with 30s timeout
- API keys stored in-memory only, never persisted
- File operations restricted to safe paths
- Math evaluation uses allowlisted functions only
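A timeout-bounded executor along the lines described above can be sketched with the standard library. Note that `subprocess` alone does not restrict the filesystem or network, so this is only one layer of a real sandbox, not the project's full implementation:

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 30.0) -> str:
    """Run Python code in a child interpreter, killed after `timeout` seconds."""
    try:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode, no site/env
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.stdout if proc.returncode == 0 else proc.stderr
    except subprocess.TimeoutExpired:
        return "Error: execution exceeded timeout"
```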