# mistralclaw
Fine-tuned Mistral-7B-Instruct-v0.2 for reliable tool/function calling, optimized for OpenClaw MCP orchestration.
```bash
# 1. Pull the base model
ollama pull mistral:7b-instruct-v0.2-q4_K_M

# 2. Download the LoRA adapter GGUF
#    (download mistralclaw-lora.gguf from this repo)

# 3. Create the Modelfile
cat > Modelfile << 'EOF'
FROM mistral:7b-instruct-v0.2-q4_K_M
ADAPTER ./mistralclaw-lora.gguf
PARAMETER temperature 0.1
PARAMETER top_p 0.9
PARAMETER num_predict 1024
SYSTEM """You are OpenClaw, a helpful AI assistant with access to tools. Use tools when needed by responding with JSON tool calls. For knowledge questions, answer directly."""
EOF

# 4. Create and run the model
ollama create mistralclaw -f Modelfile
ollama run mistralclaw
```
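Once the model is running, it should answer tool-eligible prompts with a JSON tool call, per the system prompt above. The exact schema depends on the fine-tuning data; as a minimal sketch, assuming a `{"name": ..., "arguments": ...}` shape (an assumption, not a documented contract of this model), a caller might extract the call like this:

```python
import json
import re

def extract_tool_call(text: str):
    """Pull the first JSON object out of model output and return
    (name, arguments), or None if no tool call is found.
    The {"name": ..., "arguments": ...} shape is an assumption."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if isinstance(call, dict) and "name" in call:
        return call["name"], call.get("arguments", {})
    return None

# Illustrative model output:
reply = 'Sure. {"name": "get_weather", "arguments": {"city": "Paris"}}'
print(extract_tool_call(reply))  # ('get_weather', {'city': 'Paris'})
```

With low temperature (0.1 in the Modelfile above), the model's JSON output tends to be stable enough for this kind of lightweight extraction; a production orchestrator would validate arguments against the tool's schema before dispatching.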
| File | Description | Size |
|---|---|---|
| `mistralclaw-lora.gguf` | LoRA adapter in GGUF format (for Ollama) | ~320MB |
| `adapter_model.safetensors` | LoRA adapter weights (SafeTensors) | ~640MB |
| `adapter_config.json` | LoRA configuration | 1KB |
| `Modelfile` | Ollama Modelfile | 1KB |
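For reference, `adapter_config.json` holds the PEFT LoRA hyperparameters that the SafeTensors weights were trained with. The fragment below is illustrative only; the values are placeholders, not the actual contents of this repo's file:

```json
{
  "peft_type": "LORA",
  "base_model_name_or_path": "mistralai/Mistral-7B-Instruct-v0.2",
  "r": 16,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
  "task_type": "CAUSAL_LM"
}
```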
Trained on Together AI's A100 GPUs using LoRA fine-tuning on a mix of function-calling datasets.

Training logs: https://wandb.ai/padmanabhg-freelance/together
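Mistral-7B-Instruct-v0.2 uses the `[INST] ... [/INST]` chat template, so each function-calling training pair would be rendered into that format before tokenization. A minimal sketch of that step (the JSON tool-call target shape is an assumption about the training data, not taken from this repo):

```python
import json

def format_example(user_msg: str, tool_name: str, arguments: dict) -> str:
    """Render one function-calling pair in Mistral's instruct template.
    The JSON tool-call target shape is illustrative."""
    target = json.dumps({"name": tool_name, "arguments": arguments})
    return f"<s>[INST] {user_msg} [/INST] {target}</s>"

print(format_example("What's the weather in Paris?",
                     "get_weather", {"city": "Paris"}))
# <s>[INST] What's the weather in Paris? [/INST] {"name": "get_weather", "arguments": {"city": "Paris"}}</s>
```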
Base model: mistralai/Mistral-7B-Instruct-v0.2