Tags: safetensors, gguf, mistral, lora, tool-calling, function-calling, mcp, openclaw, fine-tuned

MistralClaw - Mistral 7B Fine-tuned for MCP Tool Calling

Fine-tuned Mistral-7B-Instruct-v0.2 for reliable tool/function calling, optimized for OpenClaw MCP orchestration.

Model Details

  • Base Model: mistralai/Mistral-7B-Instruct-v0.2
  • Training: LoRA (rank 64, alpha 128) via Together AI
  • Training Data: 12,054 examples from xlam, glaive, hermes-fc, OpenHermes
  • Training Loss: 0.4219 (3 epochs)
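To make the rank/alpha numbers above concrete, here is a minimal sketch of the standard LoRA update, W' = W + (alpha / rank) · B·A. With rank 64 and alpha 128, the effective scaling factor is 2.0. The 4096 hidden size is illustrative (Mistral-7B's hidden dimension), and the random weights are placeholders, not the actual model weights.

```python
import numpy as np

# Standard LoRA formulation: W' = W + (alpha / rank) * B @ A
rank, alpha = 64, 128
d_out, d_in = 4096, 4096  # illustrative: Mistral-7B hidden size

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)).astype(np.float32)
A = rng.standard_normal((rank, d_in)).astype(np.float32) * 0.01
B = np.zeros((d_out, rank), dtype=np.float32)  # B starts at zero

scaling = alpha / rank
W_adapted = W + scaling * (B @ A)

print(scaling)                     # 2.0
print(np.allclose(W, W_adapted))   # True: zero-initialized B means no change yet
```

During training, B moves away from zero and the low-rank product B·A (scaled by 2.0) becomes the learned adjustment that the GGUF adapter file stores.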

Quick Start with Ollama

# 1. Pull base model
ollama pull mistral:7b-instruct-v0.2-q4_K_M

# 2. Download the LoRA adapter GGUF
# (download mistralclaw-lora.gguf from this repo)

# 3. Create Modelfile
cat > Modelfile << 'EOF'
FROM mistral:7b-instruct-v0.2-q4_K_M
ADAPTER ./mistralclaw-lora.gguf

PARAMETER temperature 0.1
PARAMETER top_p 0.9
PARAMETER num_predict 1024

SYSTEM """You are OpenClaw, a helpful AI assistant with access to tools. Use tools when needed by responding with JSON tool calls. For knowledge questions, answer directly."""
EOF

# 4. Create and run
ollama create mistralclaw -f Modelfile
ollama run mistralclaw
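Once the model is running, replies may contain a JSON tool call embedded in text. The card does not publish the exact schema, so the `{"name": ..., "arguments": {...}}` shape below is an assumption; adjust the parser to whatever format your MCP client expects.

```python
import json
import re


def extract_tool_call(text: str):
    """Pull the first JSON object out of a model reply, if any.

    Assumed schema (not confirmed by the card): {"name": ..., "arguments": {...}}.
    Returns None for plain-text answers or unparseable JSON.
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    return call if "name" in call else None


reply = 'Let me check. {"name": "get_weather", "arguments": {"city": "Paris"}}'
call = extract_tool_call(reply)
print(call["name"])  # get_weather
```

A low temperature (0.1, as set in the Modelfile) keeps the JSON output deterministic enough for this kind of parsing to be reliable.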

Files

File                       | Description                              | Size
mistralclaw-lora.gguf      | LoRA adapter in GGUF format (for Ollama) | ~320 MB
adapter_model.safetensors  | LoRA adapter weights (SafeTensors)       | ~640 MB
adapter_config.json        | LoRA configuration                       | ~1 KB
Modelfile                  | Ollama Modelfile                         | ~1 KB

Training Details

Trained on Together AI's A100 GPUs using LoRA fine-tuning on a mix of function-calling datasets:

  • Salesforce xlam-function-calling-60k
  • Glaive function-calling-v2
  • NousResearch hermes-function-calling-v1
  • OpenHermes multi-turn conversations
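Mixing these datasets requires rendering each example into a single chat template. The exact template used for this run is not published in the card; the sketch below assumes Mistral's standard `[INST] ... [/INST]` instruct format, with the tool call serialized as the completion.

```python
import json


def format_example(system: str, user: str, tool_call: dict) -> str:
    # Hypothetical formatter, assuming Mistral's [INST] instruct template.
    # The actual preprocessing pipeline for this model is not published.
    prompt = f"{system}\n\n{user}"
    completion = json.dumps(tool_call)
    return f"<s>[INST] {prompt} [/INST] {completion}</s>"


sample = format_example(
    system="You have access to tools. Respond with a JSON tool call when needed.",
    user="What's the weather in Tokyo?",
    tool_call={"name": "get_weather", "arguments": {"city": "Tokyo"}},
)
print(sample.startswith("<s>[INST]"))  # True
```

Keeping the completion as strict JSON (via `json.dumps`) is what teaches the model to emit machine-parseable tool calls rather than free-form text.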

W&B Dashboard: https://wandb.ai/padmanabhg-freelance/together
