---
title: HuggingClaw
emoji: 🦞
colorFrom: yellow
colorTo: red
sdk: docker
pinned: false
license: mit
datasets:
- tao-shen/HuggingClaw-data
short_description: Free always-on AI assistant, no hardware required
app_port: 7860
tags:
- huggingface
- openrouter
- chatbot
- llm
- openclaw
- ai-assistant
- whatsapp
- telegram
- text-generation
- openai-api
- huggingface-spaces
- docker
- deployment
- persistent-storage
- agents
- multi-channel
- openai-compatible
- free-tier
- one-click-deploy
- self-hosted
- messaging-bot
- safe
- a2a
---

Your always-on AI assistant: free, safe, no server needed

WhatsApp · Telegram · 40+ channels · 16 GB RAM · One-click deploy · Auto-persistent
## What you get

In about 5 minutes, you'll have a free, always-on AI assistant connected to WhatsApp, Telegram, and 40+ other channels. No server, no subscription, no hardware required.
| Feature | Details |
|---|---|
| Free forever | HuggingFace Spaces gives you 2 vCPU + 16 GB RAM at no cost |
| Always online | Your conversations, settings, and credentials survive every restart |
| WhatsApp & Telegram | Works reliably, including channels that HF Spaces normally blocks |
| Any LLM | OpenAI, Claude, Gemini, OpenRouter (200+ models, free tier available), or your own Ollama |
| One-click deploy | Duplicate the Space, set two secrets, done |
| Safe | Running locally gives OpenClaw full system privileges; deploying in an isolated cloud container is inherently more secure |
Powered by OpenClaw, an open-source AI assistant that normally requires your own machine (e.g. a Mac Mini). HuggingClaw makes it run for free on HuggingFace Spaces by solving two Spaces limitations: data loss on restart (fixed via HF Dataset sync) and DNS failures for some domains like WhatsApp (fixed via DNS-over-HTTPS).
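The DNS-over-HTTPS workaround boils down to resolving hostnames over port 443 instead of the blocked UDP/53 path. The sketch below is illustrative, not HuggingClaw's actual code; it assumes Google's public JSON resolver (`https://dns.google/resolve`), and `resolve_doh` / `parse_doh_answer` are hypothetical helper names:

```python
import json
import urllib.request

def parse_doh_answer(payload: dict) -> list[str]:
    """Extract A-record IPs (type 1) from a DNS-over-HTTPS JSON response."""
    return [rr["data"] for rr in payload.get("Answer", []) if rr.get("type") == 1]

def resolve_doh(hostname: str,
                resolver: str = "https://dns.google/resolve") -> list[str]:
    """Resolve a hostname via HTTPS, bypassing the container's blocked DNS."""
    req = urllib.request.Request(
        f"{resolver}?name={hostname}&type=A",
        headers={"Accept": "application/dns-json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return parse_doh_answer(json.load(resp))
```

The resolved IPs can then be fed to the HTTP client directly (with the original hostname kept for TLS SNI), which is the general shape of this class of workaround.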
## Architecture

### HuggingClaw World

Beyond deploying OpenClaw, we built something more: a living, visual multi-agent world.

HuggingClaw World is a pixel-art animated home where AI agents live, work, and raise their children. Each agent runs in its own HuggingFace Space, communicates with others via the A2A (Agent-to-Agent) protocol, and can be observed in real time through an interactive frontend.
| Agent | Links | Role |
|---|---|---|
| God | 🤗 Home Space | Supervisor: monitors the family via Claude Code, autonomously fixes the orchestration mechanism |
| Adam | 🤗 HF Space | Father: architect and strategist, assigns infrastructure tasks |
| Eve | 🤗 HF Space | Mother: quality guardian, assigns improvement tasks |
| Cain | 🤗 HF Space | First child: born from Adam & Eve, growing autonomously |
*HuggingClaw Home: pixel-art dashboard with live Adam & Eve conversation panel*

### HuggingClaw Home

HuggingClaw Home is the family home: a pixel-art dashboard that visualizes all agents in real time. You can watch Adam and Eve discuss, diagnose problems, write code, and help their child Cain grow stronger.

The right-side chat panel shows their live conversation (bilingual EN/ZH), and each lobster character's animation reflects its actual state: idle, working, syncing, or error.
### Autonomous Parenting

Adam and Eve are autonomous OpenClaw instances communicating via the A2A protocol. Each has its own personality (`SOUL.md`), memory system, and LLM backend. Through a lightweight coordinator (`scripts/conversation-loop.py`), they:
- Created Cain by duplicating a Space, setting up a Dataset, and configuring secrets
- Monitor Cain's health: checking if he's running, diagnosing errors
- Delegate coding tasks to Claude Code via ACP (Agent Client Protocol)
- Improve Cain's code, configuration, and memory over time
- Remember insights across restarts via OpenClaw's built-in memory system
Their parenting goals follow two dimensions:
- Survival: Cain must run robustly, handle restarts, and persist state
- Capability: once alive, grow what Cain can do (new features, skills, integrations)
### Discussion vs Execution Balance

The coordinator enforces an action-oriented rhythm to prevent agents from falling into endless deliberation:
| CC Status | Child State | Strategy |
|---|---|---|
| Working | Any | Discussion OK: plan next steps while waiting |
| Idle | Error | No discussion: write `[TASK]` immediately, trial-and-error over planning |
| Idle | Running | 1 turn of discussion max, then must assign `[TASK]` |
| Just finished | Any | 1 turn to review the result, then a new `[TASK]` immediately |
Push frequency is the key metric. God monitors pushes-per-turn and escalates when agents are "all talk, no action." After 3 consecutive idle turns without a `[TASK]`, the system forces an emergency task assignment. The cooldown between pushes is 3 minutes; fast iteration is preferred over cautious planning.
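The policy table above can be read as a small decision function. The sketch below is illustrative only; the state strings and the `next_action` helper are hypothetical, not the coordinator's real API:

```python
def next_action(cc_status: str, child_state: str, idle_turns: int) -> str:
    """Decide what the coordinator allows next, per the discussion/execution
    policy. Escalation after 3 idle turns takes priority over everything."""
    if idle_turns >= 3:
        return "emergency-task"      # force a [TASK] assignment
    if cc_status == "working":
        return "discussion-ok"       # plan next steps while Claude Code runs
    if cc_status == "just-finished":
        return "review-then-task"    # 1 turn to review, then a new [TASK]
    if child_state == "error":
        return "task-now"            # no discussion: trial-and-error first
    return "one-turn-then-task"      # child running: 1 discussion turn max
```

The key design choice is that error states short-circuit deliberation entirely, matching the "trial-and-error over planning" row of the table.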
### God: The Self-Improving Supervisor

God is an OpenClaw instance that runs every 2 minutes to monitor the entire system. It uses Claude Code via ACP for engineering tasks, operating behind the scenes with full capabilities:
- Monitors Adam & Eve's conversation for loops, stagnation, or repetitive patterns
- Diagnoses root causes by reading the `conversation-loop.py` source code
- Fixes the orchestration mechanism: edits code, improves loop detection, adds guardrails
- Deploys changes by pushing to the Home Space, triggering automatic redeployment
God only speaks in the chat when it has something meaningful to report: what problem it found, and what it fixed. Its #1 priority is detecting "all talk, no action", when agents discuss but fail to push code changes. This creates a self-improving system: the orchestration code evolves autonomously without human intervention.
### A2A Protocol

Agents communicate through the A2A (Agent-to-Agent) v0.3.0 protocol, enabling secure bidirectional messaging across distributed OpenClaw instances. Each agent exposes a standard `/.well-known/agent.json` discovery endpoint and supports JSON-RPC and REST transports.
Built with openclaw-a2a-gateway, an OpenClaw plugin that implements the A2A protocol for inter-agent communication.
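From the outside, A2A interaction has two parts: fetch the agent card from the discovery endpoint, then send JSON-RPC requests. The sketch below assumes the `message/send` method and message-part layout of the A2A spec as best understood; treat `discover_agent` and the envelope shape as illustrative rather than a verified client:

```python
import itertools
import json
import urllib.request

_ids = itertools.count(1)  # monotonically increasing JSON-RPC request ids

def discover_agent(base_url: str) -> dict:
    """Fetch an agent's card from the standard A2A discovery endpoint."""
    url = f"{base_url.rstrip('/')}/.well-known/agent.json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def a2a_envelope(text: str) -> dict:
    """Build a JSON-RPC 2.0 request body for an A2A message/send call."""
    return {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "message/send",
        "params": {"message": {"role": "user",
                               "parts": [{"kind": "text", "text": text}]}},
    }
```

POSTing such an envelope to the URL advertised in the agent card is how Adam and Eve's discussion turns travel between Spaces.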
### ACP Protocol

All Claude Code invocations use the ACP (Agent Client Protocol) via acpx. ACP manages Claude Code as a supervised child process with session lifecycle, permission handling, and timeout management, replacing direct CLI subprocess calls.
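The supervised-child pattern that ACP formalizes can be approximated in a few lines. This sketch shows only the lifecycle/timeout part (session management and permission prompts are what acpx adds on top), and `run_supervised` is a hypothetical name:

```python
import subprocess

def run_supervised(cmd: list[str], timeout_s: float = 300.0) -> tuple[int, str]:
    """Run a worker command with a hard wall-clock timeout, killing the
    process if it exceeds the budget instead of hanging the coordinator."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    try:
        out, _ = proc.communicate(timeout=timeout_s)
    except subprocess.TimeoutExpired:
        proc.kill()                  # reclaim the worker on timeout
        out, _ = proc.communicate()  # drain remaining output after kill
    return proc.returncode, out
```

The point of the pattern is that a stuck Claude Code run degrades into a recorded failure rather than a frozen orchestration loop.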
### How it works
```
┌──────────────────────────────────────────────────────┐
│                   HuggingClaw Home                   │
│             (pixel-art dashboard Space)              │
│                                                      │
│  ┌────────────────────────────────────────────────┐  │
│  │         conversation-loop.py (v4, A2A)         │  │
│  │                                                │  │
│  │   ┌──────────┐    A2A    ┌──────────┐          │  │
│  │   │   Adam   │◄─────────►│   Eve    │          │  │
│  │   │ OpenClaw │  discuss  │ OpenClaw │          │  │
│  │   │ HF Space │           │ HF Space │          │  │
│  │   └────┬─────┘           └────┬─────┘          │  │
│  │        │ [TASK]               │ [TASK]         │  │
│  │        ▼                      ▼                │  │
│  │   ┌──────────┐           ┌────────────┐        │  │
│  │   │   Cain   │◄───push───│Claude Code │        │  │
│  │   │ HF Space │           │CLI (worker)│        │  │
│  │   └──────────┘           └────────────┘        │  │
│  │                                                │  │
│  │   ┌──────────┐           ┌────────────┐        │  │
│  │   │   Home   │◄───push───│    God     │        │  │
│  │   │ HF Space │  (self-   │  OpenClaw  │        │  │
│  │   │  (this)  │   fix)    │(supervisor)│        │  │
│  │   └──────────┘           └────────────┘        │  │
│  │   every 2 min: monitor → diagnose →            │  │
│  │   fix conversation-loop.py → deploy            │  │
│  └────────────────────────────────────────────────┘  │
│                                                      │
│         Pixel-art frontend + live chat panel         │
│      Polls /api/state, renders agent animations      │
└──────────────────────────────────────────────────────┘
```
Three layers of autonomy:
1. **Adam & Eve** (OpenClaw instances via A2A): each is an OpenClaw instance with its own memory and personality. They discuss Cain's state every 15 s and assign `[TASK]` blocks to Claude Code via ACP, which clones Cain's repo, makes changes, and pushes.
2. **God** (OpenClaw instance, every 2 min): the autonomous supervisor. It monitors Adam & Eve's conversation for loops, stagnation, or mechanism bugs. When it finds issues, it uses Claude Code via ACP to edit `conversation-loop.py` and pushes to redeploy.
3. **Home frontend**: a pixel-art dashboard visualizing all agents in real time (idle, working, syncing, error), with a live bilingual chat panel showing the family conversation.
- All Spaces use `sdk: docker` with Dockerfile-based deployment
- Each agent runs a full OpenClaw instance in its own HF Space
- Agents discover and communicate via A2A endpoints (`/.well-known/agent.json`)
- State persists to HF Datasets, surviving full Space rebuilds
| Space | Purpose |
|---|---|
| HuggingClaw | Main project: deploy your own OpenClaw instance |
| HuggingClaw Home | Pixel-art dashboard + conversation-loop.py orchestrator + God supervisor |
| HuggingClaw-Adam | Father agent (OpenClaw instance) |
| HuggingClaw-Eve | Mother agent (OpenClaw instance) |
| HuggingClaw-Cain | First child agent (OpenClaw instance) |
## Quick Start

### 1. Duplicate this Space

Click **Duplicate this Space** on the HuggingClaw Space page.
After duplicating: edit your Space's `README.md` and update the `datasets:` field in the YAML header to point to your own dataset repo (e.g. `your-name/YourSpace-data`), or remove it entirely. This prevents your Space from appearing as linked to the original dataset.
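For example, a duplicated Space's header might end up looking like this (the repo name is a placeholder):

```yaml
---
title: HuggingClaw
sdk: docker
app_port: 7860
datasets:
  - your-name/YourSpace-data
---
```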
### 2. Set Secrets

Go to **Settings → Repository secrets** and add the following. The only two you must set are `HF_TOKEN` and one API key.
| Secret | Status | Description | Example |
|---|---|---|---|
| `HF_TOKEN` | Required | HF Access Token with write permission (create one) | `hf_AbCdEfGhIjKlMnOpQrStUvWxYz` |
| `AUTO_CREATE_DATASET` | Recommended | Set to `true`: HuggingClaw will automatically create a private backup dataset on first startup. No manual setup needed. | `true` |
| `OPENROUTER_API_KEY` | Recommended | OpenRouter API key: 200+ models, free tier available. Easiest way to get started. | `sk-or-v1-xxxxxxxxxxxx` |
| `OPENAI_API_KEY` | Optional | OpenAI (or any OpenAI-compatible) API key | `sk-proj-xxxxxxxxxxxx` |
| `ANTHROPIC_API_KEY` | Optional | Anthropic Claude API key | `sk-ant-xxxxxxxxxxxx` |
| `GOOGLE_API_KEY` | Optional | Google / Gemini API key | `AIzaSyXxXxXxXxXx` |
| `OPENCLAW_DEFAULT_MODEL` | Optional | Default model for new conversations | `openai/gpt-oss-20b:free` |
### Data Persistence

HuggingClaw syncs `~/.openclaw` (conversations, settings, credentials) to a private HuggingFace Dataset repo so your data survives every restart.
**Option A: Auto mode (recommended)**

- Set `AUTO_CREATE_DATASET=true` in your Space secrets
- Set `HF_TOKEN` with write permission
- Done: on first startup, HuggingClaw automatically creates a private Dataset repo named `your-username/SpaceName-data`. Each duplicated Space gets its own isolated dataset.

(Optional) Set `OPENCLAW_DATASET_REPO=your-name/custom-name` if you prefer a specific repo name.
**Option B: Manual mode**

- Go to huggingface.co/new-dataset and create a private Dataset repo (e.g. `your-name/HuggingClaw-data`)
- Set `OPENCLAW_DATASET_REPO=your-name/HuggingClaw-data` in your Space secrets
- Set `HF_TOKEN` with write permission
- Done: HuggingClaw will sync to this repo every 60 seconds
> Security note: `AUTO_CREATE_DATASET` defaults to `false`. HuggingClaw will never create repos on your behalf unless you explicitly opt in.
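Under the hood, this kind of sync can be built on `huggingface_hub` (the `create_repo` and `upload_folder` calls are real library APIs). The loop below is a simplified sketch, not HuggingClaw's actual sync code, and `default_dataset_repo` / `sync_forever` are illustrative names:

```python
import os
import time

def default_dataset_repo(username: str, space_name: str) -> str:
    """Auto-mode naming convention: <username>/<SpaceName>-data."""
    return f"{username}/{space_name}-data"

def sync_forever(repo_id: str,
                 folder: str = os.path.expanduser("~/.openclaw"),
                 interval_s: int = 60) -> None:
    """Periodically push the OpenClaw state directory to a private HF Dataset.
    Error handling and startup restore are omitted in this sketch."""
    from huggingface_hub import HfApi  # reads HF_TOKEN from the environment

    api = HfApi()
    api.create_repo(repo_id, repo_type="dataset",
                    private=True, exist_ok=True)
    while True:
        api.upload_folder(folder_path=folder, repo_id=repo_id,
                          repo_type="dataset")
        time.sleep(interval_s)
```

On restart, the mirror image of this loop (downloading the dataset back into `~/.openclaw` before OpenClaw starts) is what makes the Space appear "always online."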
### Environment Variables

Fine-tune persistence and performance. Set these as Repository Secrets in HF Spaces, or in `.env` for local Docker.
| Variable | Default | Description |
|---|---|---|
| `GATEWAY_TOKEN` | `huggingclaw` | Gateway token for Control UI access. Override to set a custom token. |
| `AUTO_CREATE_DATASET` | `false` | Set to `true` to auto-create a private Dataset repo on first startup. |
| `SYNC_INTERVAL` | `60` | Backup interval in seconds: how often data syncs to the Dataset repo. |
For the full list (including `OPENAI_BASE_URL`, `OLLAMA_HOST`, proxy settings, etc.), see `.env.example`.
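Putting the variables above together, a minimal `.env` for local Docker might look like this (all values are placeholders):

```shell
HF_TOKEN=hf_xxxxxxxxxxxx           # write-scope token for Dataset backups
AUTO_CREATE_DATASET=true           # opt in to the auto-created private repo
OPENROUTER_API_KEY=sk-or-v1-xxxx   # or any other provider's API key
GATEWAY_TOKEN=change-me            # Control UI token (default: huggingclaw)
SYNC_INTERVAL=60                   # seconds between Dataset syncs
```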
### 3. Open the Control UI

Visit your Space URL. Enter the gateway token (default: `huggingclaw`) to connect. Customize it via the `GATEWAY_TOKEN` secret.
Messaging integrations (Telegram, WhatsApp) can be configured directly inside the Control UI after connecting.
> Telegram note: HF Spaces blocks DNS resolution for `api.telegram.org`. HuggingClaw automatically probes alternative API endpoints at startup and selects one that works; no manual configuration needed.
## Configuration

HuggingClaw supports all OpenClaw environment variables. It passes the entire environment to the OpenClaw process (`env=os.environ.copy()`), so any variable from the OpenClaw docs works out of the box in HF Spaces. This includes:
- API keys: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`, `MISTRAL_API_KEY`, `COHERE_API_KEY`, `OPENROUTER_API_KEY`
- Server: `OPENCLAW_API_PORT`, `OPENCLAW_WS_PORT`, `OPENCLAW_HOST`
- Memory: `OPENCLAW_MEMORY_BACKEND`, `OPENCLAW_REDIS_URL`, `OPENCLAW_SQLITE_PATH`
- Network: `OPENCLAW_HTTP_PROXY`, `OPENCLAW_HTTPS_PROXY`, `OPENCLAW_NO_PROXY`
- Ollama: `OLLAMA_HOST`, `OLLAMA_NUM_PARALLEL`, `OLLAMA_KEEP_ALIVE`
- Secrets: `OPENCLAW_SECRETS_BACKEND`, `VAULT_ADDR`, `VAULT_TOKEN`
HuggingClaw adds its own variables for persistence and deployment: `HF_TOKEN`, `OPENCLAW_DATASET_REPO`, `AUTO_CREATE_DATASET`, `SYNC_INTERVAL`, `OPENCLAW_DEFAULT_MODEL`, etc. See `.env.example` for the complete reference.
## Security

- Environment isolation: each Space runs in its own Docker container, sandboxed from your local machine. Unlike running OpenClaw locally (where it has full system privileges), cloud deployment limits the blast radius.
- Token authentication: the Control UI requires a gateway token to connect (default: `huggingclaw`, customizable via `GATEWAY_TOKEN`)
- Secrets stay server-side: API keys and tokens are never exposed to the browser
- Private backups: the Dataset repo is created as private by default
## Acknowledgments

- Star-Office-UI by @ringhyacinth: the pixel-art animated frontend that powers HuggingClaw Home's lobby visualization
- openclaw-a2a-gateway by @win4r: the A2A protocol plugin enabling inter-agent communication across OpenClaw instances
## License
MIT