---
language:
- en
license: apache-2.0
library_name: peft
base_model: unsloth/functiongemma-270m-it
tags:
- function-calling
- tool-use
- gemma3
- lora
- peft
datasets:
- Salesforce/xlam-function-calling-60k
- MadeAgents/xlam-irrelevance-7.5k
pipeline_tag: text-generation
---

# sumitagrawal/functiongemma-270m-tool-agent

A [FunctionGemma 270M](https://huggingface.co/unsloth/functiongemma-270m-it) LoRA adapter fine-tuned for **general tool/function calling**.

| Resource | Link |
|---|---|
| **Source code** | [tech-sumit/tool-agent](https://github.com/tech-sumit/tool-agent) |
| **Blog post** | [sumitagrawal.dev/blog/finetuning-functiongemma-270m-tool-calling](https://sumitagrawal.dev/blog/finetuning-functiongemma-270m-tool-calling/) |
| **Base model** | [unsloth/functiongemma-270m-it](https://huggingface.co/unsloth/functiongemma-270m-it) |

## Benchmark Results

![Benchmark Results](benchmark-results.png)

Evaluated using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) on 100 held-out general function-calling examples. End-to-end through the [tool agent](https://github.com/tech-sumit/tool-agent) pipeline, tool selection accuracy improved from **14% to 57%** on a 7-query evaluation.

## Training

- **Base model**: [`unsloth/functiongemma-270m-it`](https://huggingface.co/unsloth/functiongemma-270m-it) (Gemma 3 270M)
- **Method**: [LoRA](https://arxiv.org/abs/2106.09685) (r=16, alpha=32) via [PEFT](https://huggingface.co/docs/peft) + [TRL](https://huggingface.co/docs/trl) `SFTTrainer`
- **Dataset**: ~13,000 general function-calling examples (composition below)
- **Epochs**: 3
- **Training time**: 25 minutes
- **Hardware**: NVIDIA H100 SXM 80GB via [vast.ai](https://vast.ai)

### Data composition

| Source | Examples | Purpose |
|--------|----------|---------|
| [Salesforce/xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) | ~10,000 | General function calling |
| [MadeAgents/xlam-irrelevance-7.5k](https://huggingface.co/datasets/MadeAgents/xlam-irrelevance-7.5k) | ~3,000 | Negative examples / refusal |
| **Total** | **~13,000** | |

## Usage

### With PEFT

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, apply the LoRA adapter, and merge it for faster inference
base = AutoModelForCausalLM.from_pretrained("unsloth/functiongemma-270m-it", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "sumitagrawal/functiongemma-270m-tool-agent")
model = model.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained("sumitagrawal/functiongemma-270m-tool-agent")

prompt = """<start_of_turn>user
You are a model that can do function calling with the following functions
{"name": "get_weather", "description": "Get current weather", "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]}}
{"name": "send_email", "description": "Send an email", "parameters": {"type": "object", "properties": {"to": {"type": "string"}, "subject": {"type": "string"}, "body": {"type": "string"}}, "required": ["to", "subject", "body"]}}
What's the weather in Tokyo?<end_of_turn>
<start_of_turn>model
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, temperature=0.1, do_sample=True)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
# call:get_weather{city:Tokyo}
```
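The completion is plain text in the compact call format documented under **Output format** below. A minimal parsing sketch (the `parse_call` helper and its regex are illustrative, not part of the tool-agent repo, and assume flat arguments with no nested braces or commas inside values):

```python
import re

def parse_call(text: str):
    """Parse a completion like 'call:get_weather{city:Tokyo}' into a
    (function_name, arguments) pair; return None when the model did
    not emit a function call."""
    m = re.match(r"call:(\w+)\{(.*)\}", text.strip())
    if not m:
        return None
    name, raw_args = m.group(1), m.group(2)
    args = {}
    for pair in raw_args.split(","):
        if ":" in pair:
            key, value = pair.split(":", 1)
            args[key.strip()] = value.strip()
    return name, args

print(parse_call("call:get_weather{city:Tokyo}"))
# ('get_weather', {'city': 'Tokyo'})
```

Because the training mix includes irrelevance/refusal examples, the model may answer in plain text instead of emitting a call; `parse_call` returns `None` in that case.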
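The server example below points `TOOL_AGENT_MODEL` at a local checkpoint directory. A short sketch, continuing from the PEFT example above, that writes the merged model to disk (using `./models/finetuned` to mirror the server example is an assumption; any directory works):

```python
# Persist the merged model and tokenizer so local tooling (the server
# below, or a GGUF conversion) can load them from disk.
# The path is an assumption chosen to match the TOOL_AGENT_MODEL example.
model.save_pretrained("./models/finetuned")
tokenizer.save_pretrained("./models/finetuned")
```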
### With the Tool Agent Server

```bash
git clone https://github.com/tech-sumit/tool-agent.git
cd tool-agent
pip install -e .

TOOL_AGENT_BACKEND=transformers \
TOOL_AGENT_MODEL=./models/finetuned \
python -m agent.server
# Server starts on http://localhost:8888 with REST, WebSocket, MCP, and A2A
```

### With Ollama (GGUF)

Export the merged model to GGUF first (e.g., with llama.cpp's `convert_hf_to_gguf.py`), then:

```bash
ollama create tool-agent -f Modelfile
ollama run tool-agent
```

## Output format

The model emits calls in FunctionGemma's native control-token format:

```
call:function_name{param1:value1,param2:value2}
```

## License

Apache 2.0 (same as the base model).