---
title: "🦅 Multeclaw"
emoji: 🦅
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: "6.13.0"
app_file: app.py
pinned: true
license: mit
short_description: Multi-model AI agent with GPT, Claude, Llama, Groq
tags:
- agent
- multi-model
- chat
- tools
- openai
- anthropic
- llama
- mistral
---

# 🦅 Multeclaw — Multi-Model AI Agent System

A complete AI agent framework connecting **GPT**, **Claude**, **Llama**, **Mistral**, **Qwen**, **Groq**, and **Ollama** through a single unified interface with built-in tool execution, multi-step planning, and safety layers.

## 🏗️ Architecture

| Component | Role |
|-----------|------|
| **Router** | Classifies tasks → direct, tool-assisted, multi-step, code, analysis |
| **Planner** | Decomposes complex objectives into executable step sequences |
| **LLM Client** | Unified streaming interface across 5 providers |
| **Executor** | Runs LLM completions and tool calls |
| **Memory** | Tracks conversation history, tool results, and session context |
| **Tool System** | Sandboxed calculator, code execution, file I/O |
| **Safety Layer** | Input content filtering |
| **Repair Loop** | Retries failed tool calls with error context |

## 🤖 Supported Providers & Models

| Provider | Models | Key Feature |
|----------|--------|-------------|
| **OpenAI** | GPT-4o, GPT-4o Mini, GPT-4 Turbo | Strong reasoning, tool calling |
| **Anthropic** | Claude 4 Opus, Claude 4 Sonnet, Claude 3.5 Haiku | Deep analysis, 200K context |
| **HuggingFace** | Llama 3 8B, Qwen 2.5 72B, Mistral 7B | Open models via inference API |
| **Groq** | Llama 3 70B, Mixtral 8x7B | Ultra-fast inference |
| **Ollama** | Any local model | Privacy, offline use |

## 🔄 Agent Loops

1. **Reasoning Loop** — LLM processes the request with full conversation context
2. **Tool Loop** — LLM decides to call tools, results are fed back iteratively
3. **Planning Loop** — Complex tasks auto-decomposed into steps, executed sequentially
4. **Repair Loop** — Failed operations retried with error context

## 🛠️ Built-in Tools

- **Calculator** — Safe math evaluation with Python math syntax
- **Code Executor** — Sandboxed Python execution with timeout
- **File Reader/Writer** — Safe file I/O operations
- **Web Search** — (Placeholder for integration)

## 🚀 Quick Start

1. Go to the **🔑 Settings** tab
2. Enter your API key(s) for the provider(s) you want to use
3. Go to the **💬 Chat** tab
4. Select a model from the dropdown
5. Start chatting!

## 🎭 Personas

- **Multeclaw Agent** — General-purpose AI agent
- **Code Expert** — Senior software engineer
- **Research Analyst** — Thorough research analysis
- **Creative Writer** — Vivid, engaging writing
- **Data Scientist** — Rigorous data analysis
- **Custom** — Your own system prompt

## 📁 Project Structure

```
multeclaw/
├── __init__.py        # Package exports
├── config.py          # Model registry, prompts, tools, agent config
├── llm_client.py      # Unified multi-provider LLM client
├── agent.py           # Agent core: router, planner, executor, memory, safety
└── ui.py              # Gradio UI with 6 tabs
app.py                 # Entry point
requirements.txt       # Dependencies
```

## 🛡️ Safety

- Input content filtering blocks harmful requests
- Code execution sandboxed with 30s timeout
- API keys stored in-memory only, never persisted
- File operations restricted to safe paths
- Math evaluation uses allowlisted functions only
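## 💡 Implementation Sketches

To make the pieces above concrete, here are a few minimal, hedged sketches. They are illustrations of the ideas, not the actual `multeclaw` source; all function names (`route`, `safe_eval`, `run_code`, `repair_loop`) are hypothetical.

The router classifies each request into one of the task types from the architecture table. A real router would likely use an LLM call; this keyword heuristic only illustrates the input/output contract:

```python
import re

def route(request: str) -> str:
    """Classify a request into one of the task types from the
    architecture table. Hypothetical keyword-based sketch; the
    real router's logic may differ entirely."""
    text = request.lower()
    if re.search(r"\b(calculate|compute|convert|search|read file)\b", text):
        return "tool-assisted"
    if re.search(r"\b(plan|step by step|first .* then)\b", text):
        return "multi-step"
    if re.search(r"\b(code|function|script|implement|debug)\b", text):
        return "code"
    if re.search(r"\b(analy[sz]e|compare|evaluate)\b", text):
        return "analysis"
    return "direct"
```

The returned label would then select which agent loop handles the request.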
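The calculator's "allowlisted functions only" rule can be realized by walking the expression's AST and rejecting any node outside an explicit allowlist, instead of calling `eval` on raw input. A minimal sketch (the allowlist contents are an assumption):

```python
import ast
import math
import operator

# Allowlisted operators, functions, and constants -- nothing else runs.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.Mod: operator.mod,
    ast.USub: operator.neg,
}
_FUNCS = {name: getattr(math, name) for name in ("sqrt", "sin", "cos", "log", "exp")}
_CONSTS = {"pi": math.pi, "e": math.e}

def safe_eval(expr: str) -> float:
    """Evaluate a math expression via an AST walk; any node outside
    the allowlist raises ValueError. Hypothetical sketch."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name) and node.id in _CONSTS:
            return _CONSTS[node.id]
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in _FUNCS):
            return _FUNCS[node.func.id](*[walk(a) for a in node.args])
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))
```

Because `__import__`, attribute access, and any name outside the allowlist all fall through to `ValueError`, the classic `eval` injection vectors are closed off by construction.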
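The code executor's sandboxing can be approximated by running untrusted code in a separate interpreter process with a hard timeout (the README's 30 s default). This sketch only shows the timeout and isolation mechanics; a production sandbox would also restrict filesystem and network access:

```python
import subprocess
import sys

def run_code(code: str, timeout: float = 30.0) -> dict:
    """Run untrusted Python in a child interpreter with a hard timeout.
    Hypothetical sketch; -I starts Python in isolated mode (no user
    site-packages, no env-var injection)."""
    try:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
        return {"ok": proc.returncode == 0,
                "stdout": proc.stdout, "stderr": proc.stderr}
    except subprocess.TimeoutExpired:
        return {"ok": False, "stdout": "",
                "stderr": f"timed out after {timeout}s"}
```

On timeout, `subprocess.run` kills the child before raising, so a runaway loop cannot outlive its budget.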
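Finally, the repair loop's "retries failed tool calls with error context" behavior reduces to a bounded retry where each failure's error message is fed back to a corrector (in practice, the LLM) before the next attempt. A minimal sketch with hypothetical names:

```python
def repair_loop(call_tool, fix_args, args, max_retries=3):
    """Run call_tool(args); on failure, pass the error text to
    fix_args(args, error) to obtain repaired arguments, then retry.
    Hypothetical sketch -- in Multeclaw the corrector would be an
    LLM prompted with the failing call and its error."""
    last_error = None
    for _ in range(max_retries):
        try:
            return call_tool(args)
        except Exception as exc:
            last_error = str(exc)          # error context for the corrector
            args = fix_args(args, last_error)
    raise RuntimeError(f"tool failed after {max_retries} attempts: {last_error}")
```

The key design point is that the error string travels forward into the next attempt, so each retry is informed rather than blind.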