---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---

# 🚀 OpenClaw Continuous Pretraining Model

👉 **Try it instantly on Colab:** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1BwrFHtGHNQl5hXp8AHI2SJbiwEK3qwQM?usp=sharing)

---

## 💡 Ask anything about OpenClaw

This model is continuously pretrained on **OpenClaw `.md` files**, making it highly specialized for understanding, explaining, and helping you work with the OpenClaw ecosystem.

You can ask things like:

* How to set up OpenClaw
* How to use OpenClaw with Docker
* Debugging issues
* Understanding configs, workflows, and usage

---

## 🧠 Model Details

* **Base Model:** Mistral 7B
* **Training Type:** Continuous Pretraining (LoRA Adapter)
* **Dataset:** OpenClaw Markdown files (`.md`)
* **Framework:** Unsloth + Hugging Face Transformers
* **Optimization:** 4-bit quantization support

---

## ⚡ Quick Start (Inference Code)

```python
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # Supports RoPE scaling internally
dtype = None           # Auto-detect (float16 / bfloat16)
load_in_4bit = True    # Reduce memory usage

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.3",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)

# Load the OpenClaw LoRA adapter
model.load_adapter("Ishant06/OpenClaw-Continuous-Pretraining")

# Device setup
device = "cuda" if torch.cuda.is_available() else "cpu"

# ---- Test input ----
prompt = "how to use openclaw with docker?"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Enable Unsloth's fast inference mode
FastLanguageModel.for_inference(model)

# Generate output
outputs = model.generate(
    **inputs,
    max_new_tokens=2048,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)

# Decode response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("\n=== RESPONSE ===\n")
print(response)
```

---

## 🔥 Features

* 📚 Trained on real OpenClaw documentation
* ⚡ Fast inference using Unsloth
* 🧠 Better understanding of structured `.md` data
* 💻 Efficient on low VRAM (4-bit quantization)

---

## 🛠️ Use Cases

* OpenClaw documentation assistant
* Developer Q&A bot
* Debugging and setup guidance
* Learning OpenClaw faster

---

## 📌 Notes

* This is a **LoRA adapter**, not a full standalone model
* Requires the base model `unsloth/mistral-7b-v0.3`
* Best suited for OpenClaw-related queries

---

## ⭐ Support

If you find this useful:

* ⭐ Star the repo
* 🤝 Share with others
* 🛠️ Contribute improvements

---

# Uploaded model

- **Developed by:** Ishant06
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).