---
title: Moltbot Hybrid Engine
emoji: ⚖️
colorFrom: blue
colorTo: red
sdk: docker
pinned: false
app_port: 7860
---

# Moltbot Hybrid Engine

A safe AI agent for legal document processing: dual LLM backends, file matching, and Clawdbot (OpenClaw) running inside the Space.

Version 7.0.0 (last updated 2026-02-14). Clawdbot is installed; its gateway runs on port 18789 and is proxied at `/gateway`.

## Required Space Secrets

Set these in Space Settings > Repository secrets:

| Secret | Purpose |
| --- | --- |
| `HF_TOKEN` | Hugging Face API token (free at hf.co/settings/tokens); enables the HF Inference API for Qwen |
| `MOLTBOT_API_KEY` | API key for authenticating requests (set any strong value) |
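HF Spaces expose repository secrets to the container as environment variables, so the app can read them at runtime. A minimal sketch of how these two secrets might be consumed (the function name is illustrative, not the Engine's actual code):

```python
import hmac
import os

# HF Spaces inject repository secrets as environment variables at runtime.
HF_TOKEN = os.environ.get("HF_TOKEN")  # used for the HF Inference API fallback


def check_api_key(provided: str) -> bool:
    """Compare a caller-supplied key against the MOLTBOT_API_KEY secret.

    Uses a constant-time comparison to avoid leaking the key via timing.
    Returns False if the secret is unset, so an unconfigured Space rejects
    all authenticated requests rather than accepting them.
    """
    expected = os.environ.get("MOLTBOT_API_KEY")
    return expected is not None and hmac.compare_digest(provided, expected)
```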

## LLM Backends

1. **Ollama** (local, in-container): runs `qwen2.5:1.5b` if the binary installs correctly.
2. **HF Inference API** (fallback): uses `Qwen/Qwen2.5-7B-Instruct` hosted by Hugging Face (requires `HF_TOKEN`).
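The fallback order above can be sketched as follows. This is a stdlib-only illustration, not the Engine's actual code: function names, timeouts, and the HF response shape (`[{"generated_text": ...}]`, the usual text-generation format) are assumptions.

```python
import json
import os
import urllib.request


def ollama_generate(prompt: str) -> str:
    """Call the in-container Ollama server (default port 11434)."""
    payload = json.dumps(
        {"model": "qwen2.5:1.5b", "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())["response"]


def hf_generate(prompt: str) -> str:
    """Fall back to the hosted model via the HF Inference API (needs HF_TOKEN)."""
    token = os.environ["HF_TOKEN"]
    payload = json.dumps({"inputs": prompt}).encode()
    req = urllib.request.Request(
        "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-7B-Instruct",
        data=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())[0]["generated_text"]


def generate(prompt: str, backends=(ollama_generate, hf_generate)) -> str:
    """Try each backend in order; skip any that is unreachable or unconfigured."""
    last_err = None
    for backend in backends:
        try:
            return backend(prompt)
        except (OSError, KeyError) as err:  # connection refused, missing HF_TOKEN, ...
            last_err = err
    raise RuntimeError("no LLM backend available") from last_err
```

The `backends` parameter makes the fallback chain injectable, which also makes it easy to test without a live Ollama server or network access.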

## Clawdbot (OpenClaw) in this Space

OpenClaw/Clawdbot is installed and started in this Space so you can use the autonomous agent with Qwen and Claude.

- **Control UI / WebChat:** once the Space is running, open `https://<your-space>.hf.space/gateway`.
- The gateway runs internally on port 18789; the FastAPI app proxies `/gateway` and `/gateway/*` to it, so a single Space port (7860) serves both the API and Clawdbot.
- To get autonomous behaviour (Clawdbot working on court bundle changes with Qwen/Claude):
  1. Open the Control UI at `/gateway` and complete pairing/setup (model, API keys).
  2. Add a skill that triggers your pipeline (e.g. run the web editor flow, call this Engine, run the bundler). Skills live in `~/.openclaw/workspace/skills/`; you can add a custom skill that calls your court bundle project endpoints or scripts.
  3. Wire that skill to the same Qwen/Claude endpoints the editor uses (this Space's `/api/generate` and your Claude API).
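The `/gateway` proxying described above amounts to a path rewrite from the public Space port to the internal gateway port. A sketch of that rewrite (the helper name and exact rules are assumptions; in the real app a FastAPI catch-all route would forward requests to the URL this helper produces):

```python
GATEWAY_PORT = 18789  # Clawdbot gateway's internal port


def gateway_target(path: str) -> str:
    """Rewrite an external /gateway request path to the internal gateway URL.

    /gateway          -> http://127.0.0.1:18789/
    /gateway/chat     -> http://127.0.0.1:18789/chat
    """
    if not path.startswith("/gateway"):
        raise ValueError(f"not a gateway path: {path!r}")
    suffix = path[len("/gateway"):] or "/"
    return f"http://127.0.0.1:{GATEWAY_PORT}{suffix}"
```

Keeping everything behind one public port matters here because a Space exposes only the single port declared as `app_port` in the frontmatter (7860).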