# mistralclaw
Fine-tuned Mistral-7B-Instruct-v0.2 for multi-step tool calling, function calling, and AI agent orchestration.
Built for OpenClaw, an open-source personal AI agent.
Training mixed 13,393 examples from four sources:

| Source | Examples | Purpose |
|---|---|---|
| Salesforce/xlam-function-calling-60k | 5,000 | Verified function calling |
| glaiveai/glaive-function-calling-v2 | 5,000 | Multi-turn tool use |
| NousResearch/hermes-function-calling-v1 | 1,893 | Hermes-style tool calls |
| teknium/OpenHermes-2.5 | 1,500 | No-tool knowledge (negative examples) |
Load the LoRA adapter on top of the base model with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base_model, "padmanabh/mistralclaw-lora")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```
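For inference, prompts follow the stock Mistral-Instruct chat template; `tokenizer.apply_chat_template` produces it for you, but the format can be sketched by hand (a simplified illustration, not the released code):

```python
def build_prompt(messages):
    """Assemble a Mistral-Instruct prompt string by hand.

    Simplified sketch of what tokenizer.apply_chat_template does for this
    base model; assumes alternating user/assistant turns starting with a
    user message.
    """
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        else:  # assistant turn, closed with the end-of-sequence token
            prompt += f" {msg['content']}</s>"
    return prompt
```

Tokenize the resulting string (with `add_special_tokens=False`, since `<s>` is already included) and pass it to `model.generate`.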
The model outputs tool calls as text with the `[TOOL_CALLS]` prefix:

```
[TOOL_CALLS] [{"name": "gmail_send", "arguments": "{\"to\": \"john@example.com\", \"subject\": \"Meeting\", \"body\": \"See you tomorrow\"}"}]
```
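Downstream code can recover structured calls by stripping the prefix and decoding the JSON; note that `arguments` may itself be a JSON-encoded string, as in the example above. A minimal parser (hypothetical helper, not shipped with the model):

```python
import json

def parse_tool_calls(output: str):
    """Extract tool calls from model output carrying the [TOOL_CALLS] prefix.

    Returns a list of {"name": ..., "arguments": ...} dicts, with any
    JSON-encoded arguments string decoded into a dict. Illustrative
    sketch, not part of the released code.
    """
    prefix = "[TOOL_CALLS]"
    text = output.strip()
    if not text.startswith(prefix):
        return []  # plain-text answer, no tool call
    calls = json.loads(text[len(prefix):].strip())
    for call in calls:
        # arguments may arrive as a JSON string rather than an object
        if isinstance(call.get("arguments"), str):
            call["arguments"] = json.loads(call["arguments"])
    return calls
```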
Tool results are provided back as user messages with the `[TOOL_RESULT]` prefix:

```
[TOOL_RESULT] gmail_send: {"status": "success", "message_id": "abc123"}
```
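Wiring the two formats together, one agent step can dispatch a parsed call to a registered Python function and render its return value as the result message (names and the dispatch scheme are illustrative, not the released code):

```python
import json

def run_tool_call(call: dict, tools: dict) -> str:
    """Execute one parsed tool call and format the [TOOL_RESULT] reply.

    `call` is one {"name": ..., "arguments": ...} entry from a parsed
    [TOOL_CALLS] list; `tools` maps tool names to Python callables.
    """
    args = call["arguments"]
    if isinstance(args, str):  # arguments may arrive JSON-encoded
        args = json.loads(args)
    result = tools[call["name"]](**args)
    return f'[TOOL_RESULT] {call["name"]}: {json.dumps(result)}'
```

The returned string is appended to the conversation as a user message, after which generation continues for the next step of the agent loop.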
Training was tracked on Weights & Biases at wandb.ai/padmanabhg-freelance/together.
License: Apache 2.0
Built for Mistral AI Worldwide Hackathon - Tokyo Edition | Track 02: Fine-Tuning by W&B
Base model: `mistralai/Mistral-7B-Instruct-v0.2`