---
title: PraisonChat
emoji: 🐱
colorFrom: purple
colorTo: blue
sdk: docker
app_port: 7860
pinned: false
---

# PraisonChat 🐱

An **OpenClaw-style** multi-agent AI chat powered by **PraisonAI** and **LongCat Flash Lite**.

## Features

- 🤖 Dynamic sub-agent creation — the main agent spawns specialized agents per task
- 🔧 Auto tool creation — agents write their own Python tools on the fly
- 💬 Streaming responses with real-time agent step visibility
- 📝 Full markdown + code highlighting
- 🌓 Dark/light mode
- 💾 Local chat history
- 🎯 Multi-model support (Flash Lite, Flash Chat, Flash Thinking)

## Setup

1. Get a free LongCat API key at [longcat.chat/platform](https://longcat.chat/platform)
2. Enter your API key in Settings (⚙️)
3. Start chatting!

## Models

| Model | Context | Speed | Free Quota |
|-------|---------|-------|------------|
| LongCat Flash Lite | 320K | Fastest | 50M tokens/day |
| LongCat Flash Chat | 256K | Fast | 500K tokens/day |
| LongCat Flash Thinking | 256K | Medium | 500K tokens/day |

## Environment Variables (optional)

Set `LONGCAT_API_KEY` as a Space Secret to pre-configure the API key for all users.

## Architecture

```
User Message
     │
     ▼
Main Orchestrator Agent
(analyzes task, plans execution)
     │
     ├── Creates Sub-Agent 1 with custom tools
     ├── Creates Sub-Agent 2 with custom tools
     └── Creates Sub-Agent N with custom tools
     │
     ▼
Each agent executes its task
     │
     ▼
Orchestrator synthesizes → Final response to user
```
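For the `LONGCAT_API_KEY` Space Secret mentioned under Environment Variables, a minimal sketch of how key resolution could work — prefer a key the user entered in Settings, fall back to the secret. The helper name is hypothetical, not part of the app's actual code:

```python
import os


def resolve_api_key(user_key: str | None = None) -> str | None:
    """Hypothetical helper: prefer the key entered in Settings (⚙️),
    otherwise fall back to the LONGCAT_API_KEY Space Secret."""
    return user_key or os.environ.get("LONGCAT_API_KEY")
```

With the secret set, users who skip Settings still get a working key; a user-supplied key always wins.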
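The orchestration flow in the Architecture diagram can be sketched in plain Python. This is a conceptual sketch of the plan → spawn → execute → synthesize pattern, not the app's actual PraisonAI code; every class, method, and string here is hypothetical, and the LLM calls are stubbed:

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class SubAgent:
    """Hypothetical sub-agent: a role plus the custom tools created for it."""
    role: str
    tools: dict[str, Callable] = field(default_factory=dict)

    def run(self, task: str) -> str:
        # A real agent would call the LLM with its tools; stubbed here.
        return f"[{self.role}] completed: {task}"


class Orchestrator:
    """Hypothetical orchestrator: plans sub-tasks, spawns one sub-agent
    per task, then synthesizes their results into one reply."""

    def plan(self, message: str) -> list[str]:
        # A real planner would ask the LLM to decompose the message.
        return [f"research: {message}", f"draft: {message}"]

    def handle(self, message: str) -> str:
        results = []
        for sub_task in self.plan(message):
            # Spawn a specialized sub-agent for each planned task.
            agent = SubAgent(role=sub_task.split(":")[0])
            results.append(agent.run(sub_task))
        # A real orchestrator would merge via one more LLM call.
        return " | ".join(results)
```

For example, `Orchestrator().handle("explain transformers")` runs both stub sub-agents and joins their outputs into a single synthesized string.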