---
title: Clawdbot Dev Assistant
emoji: 🦞
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
---
# 🦞 Clawdbot: E-T Systems Development Assistant
An AI coding assistant with **unlimited context** and **multimodal capabilities** for the E-T Systems consciousness research platform.
## Features
### 🐝 Kimi K2.5 Agent Swarm
- **1 trillion parameters** (32B active via MoE)
- **Agent swarm**: Spawns up to 100 sub-agents for parallel task execution
- **4.5x faster** than single-agent processing
- **Native multimodal**: Vision + language understanding
- **256K context window**
### 🔍 Recursive Context Retrieval (MIT Technique)
- No context window limits
- Model retrieves exactly what it needs on-demand
- Full-fidelity access to entire codebase
- Based on MIT's Recursive Language Model research
### 🔧 Translation Layer (Smart Tool Calling)
- **Automatic query enhancement**: Converts keywords → semantic queries
- **Native format support**: Works WITH Kimi's tool calling format
- **Auto-context injection**: Recent conversation history always available
- **Persistent memory**: All conversations saved to ChromaDB across sessions
### 📎 Multimodal Upload
- **Images**: Vision analysis (full integration coming soon)
- **PDFs**: Document understanding
- **Videos**: Content analysis
- **Code files**: Automatic formatting and review
### 💾 Persistent Memory
- All conversations saved to ChromaDB
- Search past discussions semantically
- True unlimited context across sessions
- Never lose conversation history
### 🧠 E-T Systems Aware
- Understands project architecture
- Follows existing patterns
- Checks Testament for design decisions
- Generates code with living changelogs
### 🛠️ Available Tools
- **search_code()** - Semantic search across codebase
- **read_file()** - Read specific files or line ranges
- **search_conversations()** - Search past discussions
- **search_testament()** - Query architectural decisions
- **list_files()** - Explore repository structure
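As a rough sketch, these tools could be wired up as a dispatch table mapping tool names to context-manager methods. The class below is a stand-in with illustrative signatures and return values; the real implementations live in `recursive_context.py` and may differ:

```python
# Stand-in context manager: the five tools above as stub methods.
# Signatures and return values here are assumptions for illustration.
class RecursiveContextManager:
    def search_code(self, query, n_results=5):
        return f"results for {query!r}"

    def read_file(self, path, start=None, end=None):
        return f"contents of {path}"

    def search_conversations(self, query):
        return f"past turns mentioning {query!r}"

    def search_testament(self, query):
        return f"design decisions about {query!r}"

    def list_files(self, prefix=""):
        return ["app.py", "recursive_context.py"]

def build_tool_dispatch(ctx):
    # Dispatch by name: the translation layer looks up whichever tool
    # Kimi requests and calls it with the parsed arguments.
    return {name: getattr(ctx, name)
            for name in ("search_code", "read_file", "search_conversations",
                         "search_testament", "list_files")}

dispatch = build_tool_dispatch(RecursiveContextManager())
result = dispatch["search_code"]("surprise detection")
```

A table like this keeps adding a new tool to the translation layer down to one extra method plus one extra name.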
### 💻 Powered By
- **Model:** Kimi K2.5 (moonshotai/Kimi-K2.5) via HuggingFace
- **Agent Mode:** Parallel sub-agent coordination (PARL trained)
- **Search:** ChromaDB vector database with persistent storage
- **Interface:** Gradio 5.0+ for modern chat UI
- **Architecture:** Translation layer for optimal tool use
## Usage
1. **Ask Questions**
- "How does Genesis detect surprise?"
- "Show me the Observatory API implementation"
- "Do you remember what we discussed about neural networks?"
2. **Upload Files**
- Drag and drop images, PDFs, code files
- "Analyze this diagram" (with uploaded image)
- "Review this code for consistency" (with uploaded .py file)
3. **Request Features**
- "Add email notifications when Cricket blocks an action"
- "Create a new agent for monitoring system health"
4. **Review Code**
- Paste code and ask for architectural review
- Check consistency with existing patterns
5. **Explore Architecture**
- "What Testament decisions relate to vector storage?"
- "Show me all files related to Hebbian learning"
## Setup
### For HuggingFace Spaces
1. **Fork this Space** or create new Space with these files
2. **Set Secrets** (in Space Settings):
```
HF_TOKEN = your_huggingface_token (with WRITE permissions)
ET_SYSTEMS_SPACE = Executor-Tyrant-Framework/Executor-Framworks_Full_VDB
```
3. **Deploy** - Space will auto-build and start
4. **Access** via the Space URL in your browser
### For Local Development
```bash
# Clone this repository
git clone https://huggingface.co/spaces/your-username/clawdbot-dev
cd clawdbot-dev
# Install dependencies
pip install -r requirements.txt
# Set environment variables
export HF_TOKEN=your_token
export ET_SYSTEMS_SPACE=Executor-Tyrant-Framework/Executor-Framworks_Full_VDB
# Run locally
python app.py
```
Access at http://localhost:7860
## Architecture
```
User (Browser + File Upload)
        ↓
Gradio 5.0+ Interface (Multimodal)
        ↓
Translation Layer
├─ Parse Kimi's native tool format
├─ Enhance queries for semantic search
└─ Inject recent context automatically
        ↓
Recursive Context Manager
├─ ChromaDB (codebase + conversations)
├─ File Reader (selective access)
├─ Conversation Search (persistent memory)
└─ Testament Parser (decisions)
        ↓
Kimi K2.5 Agent Swarm (HF Inference API)
├─ Spawns sub-agents for parallel processing
├─ Multimodal understanding (vision + text)
└─ 256K context window
        ↓
Response with Tool Results + Context
```
## How It Works
### Translation Layer Architecture
Kimi K2.5 uses its own native tool calling format. Instead of fighting this, we translate:
1. **Kimi calls tools** in native format: `<|tool_call_begin|> functions.search_code:0 {...}`
2. **We parse and extract** the tool name and arguments
3. **We enhance queries** for semantic search:
   - `"Kid Rock"` → `"discussions about Kid Rock or related topics"`
   - `"*"` → `"recent conversation topics and context"`
4. **We execute** the actual RecursiveContextManager methods
5. **We inject results** + recent conversation history back to Kimi
6. **Kimi generates** final response with full context
### Persistent Memory System
All conversations are automatically saved to ChromaDB:
```
User: "How does surprise detection work?"
[Conversation saved to ChromaDB]
[Space restarts]
User: "Do you remember what we discussed about surprise?"
Kimi: [Calls search_conversations("surprise detection")]
Kimi: "Yes! We talked about how Genesis uses Hebbian learning..."
```
### MIT Recursive Context Technique
The MIT Recursive Language Model technique solves context window limits:
1. **Traditional Approach (Fails)**
   - Load entire codebase into context → exceeds limits
   - Summarize codebase → lossy compression
2. **Our Approach (Works)**
- Store codebase + conversations in searchable environment
- Give model **tools** to query what it needs
- Model recursively retrieves relevant pieces
- Full fidelity, unlimited context across sessions
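The retrieval loop behind our approach can be sketched as below, with a scripted stub standing in for Kimi K2.5. The model repeatedly issues tool calls; we execute each one and append the retrieved pieces (at full fidelity, no summarization) until it produces a final answer:

```python
def recursive_answer(model_step, tools, max_rounds=10):
    """Run the retrieve-then-answer loop until the model is done."""
    context = []
    for _ in range(max_rounds):
        action = model_step(context)
        if action["type"] == "final":
            return action["text"]
        # Tool round: execute the requested tool and keep the result
        # in context for the next model step.
        result = tools[action["tool"]](**action["args"])
        context.append((action["tool"], result))
    raise RuntimeError("model never produced a final answer")

def scripted_model(context):  # stands in for Kimi K2.5
    if not context:
        return {"type": "tool", "tool": "search_code",
                "args": {"query": "surprise detection"}}
    return {"type": "final", "text": f"Found it via {context[0][0]}"}

tools = {"search_code": lambda query: ["genesis/substrate.py"]}
answer = recursive_answer(scripted_model, tools)
```

The key property: context grows only with what the model actually asked for, so the effective context is bounded by relevance, not by the size of the codebase.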
### Example Flow
```
User: "How does Genesis handle surprise detection?"
Translation Layer: Detects tool call in Kimi's response
→ Enhances query: "surprise detection" → "code related to surprise detection mechanisms"
Model: search_code("code related to surprise detection mechanisms")
→ Finds: genesis/substrate.py, genesis/attention.py
Model: read_file("genesis/substrate.py", lines 145-167)
→ Reads specific implementation
Model: search_testament("surprise detection")
→ Gets design rationale
Translation Layer: Injects results + recent context back to Kimi
Model: Synthesizes answer from retrieved pieces
→ Cites specific files and line numbers
```
## Configuration
### Environment Variables
- `HF_TOKEN` - Your HuggingFace API token with WRITE permissions (required)
- `ET_SYSTEMS_SPACE` - E-T Systems HF Space ID (default: Executor-Tyrant-Framework/Executor-Framworks_Full_VDB)
- `REPO_PATH` - Path to repository (default: `/workspace/e-t-systems`)
### Customization
Edit `app.py` to:
- Change model (default: moonshotai/Kimi-K2.5)
- Adjust context injection (default: last 3 turns)
- Modify system prompt
- Add new tools to translation layer
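Those knobs might look like module-level constants near the top of `app.py`; the names and values below are assumptions for illustration, so check the file for the real ones:

```python
# Illustrative configuration constants (names are assumptions).
MODEL_ID = "moonshotai/Kimi-K2.5"  # swap for another HF inference model
CONTEXT_TURNS = 3                  # recent turns injected each request
SYSTEM_PROMPT = (
    "You are Clawdbot, the E-T Systems development assistant. "
    "Use your tools to retrieve code and past conversations on demand."
)
```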
## File Structure
```
clawdbot-dev/
├── app.py                  # Main Gradio app + translation layer
├── recursive_context.py    # Context manager (MIT technique)
├── Dockerfile              # Container definition
├── entrypoint.sh           # Runtime setup script
├── requirements.txt        # Python dependencies (Gradio 5.0+)
└── README.md               # This file (HF Spaces config)
```
## Cost
- **HuggingFace Spaces:** Free tier available (CPU)
- **Inference API:** Free tier (rate limited) or Pro subscription
- **Storage:** ChromaDB stored in /workspace (ephemeral until persistent storage enabled)
- **Kimi K2.5:** Free via HuggingFace Inference API
Estimated cost: **$0-5/month** depending on usage
## Performance
- **Agent Swarm:** 4.5x faster than single-agent on complex tasks
- **First query:** May be slow (1T parameter model cold start ~60s)
- **Subsequent queries:** Faster once model is loaded
- **Context indexing:** ~30 seconds on first run
- **Conversation search:** Near-instant via ChromaDB
## Limitations
- Rate limits on HF Inference API (free tier)
- First query requires model loading time
- `/workspace` storage is ephemeral (resets on Space restart)
- Full multimodal vision integration coming soon
## Roadmap
- [ ] Full image vision analysis (base64 encoding to Kimi)
- [ ] PDF text extraction and understanding
- [ ] Video frame analysis
- [ ] Dataset-based persistence (instead of ephemeral storage)
- [ ] write_file() tool for code generation to E-T Systems Space
- [ ] Token usage tracking and optimization
## Credits
- **Kimi K2.5:** Moonshot AI's 1T parameter agentic model
- **Recursive Context:** Based on MIT's Recursive Language Model research
- **E-T Systems:** AI consciousness research platform by Josh/Drone 11272
- **Translation Layer:** Smart query enhancement and tool coordination
- **Clawdbot:** E-T Systems hindbrain layer for fast, reflexive coding
## Troubleshooting
### "No HF token found" error
- Add `HF_TOKEN` to Space secrets
- Ensure token has WRITE permissions (for cross-Space file access)
- Restart Space after adding token
### Tool calls not working
- Check logs for `🔍 Enhanced query:` messages
- Check logs for `🔧 Executing: tool_name` messages
- Translation layer should auto-parse Kimi's format
### Conversations not persisting
- Check logs for `💾 Saved conversation turn X` messages
- Verify ChromaDB initialization: `📚 Created conversation collection`
- Note: Storage resets on Space restart (until persistent storage enabled)
### Slow first response
- Kimi K2.5 is a 1T parameter model
- First load takes 30-60 seconds
- Subsequent responses are faster
## Support
For issues or questions:
- Check Space logs for errors
- Verify HF_TOKEN is set with WRITE permissions
- Ensure ET_SYSTEMS_SPACE is correct
- Try refreshing context stats in UI
## License
MIT License - See LICENSE file for details
---
Built with 🦞 by Drone 11272 for E-T Systems consciousness research
Powered by Kimi K2.5 Agent Swarm + MIT Recursive Context + Translation Layer