---
base_model: unsloth/functiongemma-270m-it
library_name: gguf
license: gemma
tags:
- function-calling
- tool-use
- agent
- llama-cpp
- gguf
- unsloth
- llama-agent
datasets:
- victor/functiongemma-agent-sft
pipeline_tag: text-generation
---

# FunctionGemma Agent GGUF

A fine-tuned version of [FunctionGemma-270M](https://huggingface.co/unsloth/functiongemma-270m-it) for agentic tool-calling tasks, converted to GGUF format for use with llama.cpp and [llama-agent](https://github.com/ggml-org/llama.cpp/tree/master/tools/agent).

## Model Details

| Property | Value |
|----------|-------|
| Base Model | [unsloth/functiongemma-270m-it](https://huggingface.co/unsloth/functiongemma-270m-it) |
| Fine-tuned Model | [victor/functiongemma-agent-finetuned](https://huggingface.co/victor/functiongemma-agent-finetuned) |
| Training Dataset | [victor/functiongemma-agent-sft](https://huggingface.co/datasets/victor/functiongemma-agent-sft) |
| Quantization | Q4_K_M (4-bit) |
| Parameters | 270M |

## Training

Fine-tuned using [Unsloth](https://github.com/unslothai/unsloth) with LoRA on Hugging Face Jobs infrastructure.
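As a rough configuration sketch (an illustration of the hyperparameters listed on this card, not the actual training script — helper names and defaults here are assumptions based on the public Unsloth/TRL APIs):

```python
# Sketch only: the real script is in victor/llama-agent-training.
# Dataset loading and max_seq_length are assumptions.
from unsloth import FastLanguageModel
from unsloth.chat_templates import train_on_responses_only
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/functiongemma-270m-it", max_seq_length=4096,
)
model = FastLanguageModel.get_peft_model(model, r=128, lora_alpha=256)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=...,  # victor/functiongemma-agent-sft, pre-templated
    args=SFTConfig(
        num_train_epochs=3,
        learning_rate=2e-4,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=2,
    ),
)
# Mask user/developer turns so loss is computed on model responses only.
trainer = train_on_responses_only(
    trainer,
    instruction_part="<start_of_turn>user\n",
    response_part="<start_of_turn>model\n",
)
trainer.train()
```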
**Training Configuration:**

- LoRA rank: 128, alpha: 256
- Epochs: 3
- Learning rate: 2e-4
- Batch size: 4, gradient accumulation: 2
- Hardware: NVIDIA A100-80GB
- Training method: SFT with `train_on_responses_only`

**Dataset:** 7,500 synthetic examples covering:

- Multi-step tool chaining (glob → read → edit)
- Error recovery patterns
- Clarification dialogs
- No-tool responses
- Parallel tool calls

## Tools

The model is trained on the same five tools that llama-agent exposes:

| Tool | Description |
|------|-------------|
| `read_file` | Read file contents with line numbers |
| `write_file` | Create or overwrite a file |
| `edit_file` | Find and replace text in a file |
| `glob` | Find files matching a pattern |
| `bash` | Execute a shell command |

## Usage

### With llama.cpp

```bash
# Download the quantized model
wget https://huggingface.co/victor/functiongemma-agent-gguf/resolve/main/functiongemma-270m-it.Q4_K_M.gguf

# Run inference (-e expands the \n escapes in the prompt)
./llama-cli -m functiongemma-270m-it.Q4_K_M.gguf \
  -e -p "<start_of_turn>user\nRead the main.py file<end_of_turn>\n<start_of_turn>model\n"
```

### With llama-agent

```bash
./llama-agent -m functiongemma-270m-it.Q4_K_M.gguf
```

## Format

Uses FunctionGemma's native format with `<start_of_turn>`/`<end_of_turn>` turn delimiters:

```
<start_of_turn>user
Fix the typo in config.json<end_of_turn>
<start_of_turn>model
I need to find and read the config file first.
call:glob{pattern:**/config.json}<end_of_turn>
<start_of_turn>developer
response:glob{stdout:src/config.json,stderr:,exit_code:0}<end_of_turn>
...
```

## License

This model inherits the [Gemma license](https://ai.google.dev/gemma/terms) from the base model.

## Links

- Training script: [victor/llama-agent-training](https://huggingface.co/victor/llama-agent-training)
- Dataset: [victor/functiongemma-agent-sft](https://huggingface.co/datasets/victor/functiongemma-agent-sft)
- llama-agent: [github.com/ggml-org/llama.cpp/tools/agent](https://github.com/ggml-org/llama.cpp/tree/master/tools/agent)
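Downstream code has to turn the model's flat `call:name{key:value}` lines back into structured tool calls. A minimal parser sketch (a hypothetical helper, not part of llama-agent), assuming the one-call-per-line, comma-separated `key:value` syntax shown in the Format section:

```python
import re

# Matches the flat call syntax from the transcript above, e.g.
#   call:glob{pattern:**/config.json}
# Illustrative only: assumes one call per line and that argument
# values themselves contain no commas.
CALL_RE = re.compile(r"^call:(?P<name>\w+)\{(?P<args>.*)\}$")

def parse_tool_call(line: str):
    """Return (tool_name, args_dict), or None if the line is plain text."""
    m = CALL_RE.match(line.strip())
    if not m:
        return None
    args = {}
    for pair in m.group("args").split(","):
        if pair:
            key, _, value = pair.partition(":")
            args[key] = value
    return m.group("name"), args
```

For example, `parse_tool_call("call:glob{pattern:**/config.json}")` yields `("glob", {"pattern": "**/config.json"})`, while a plain-prose model turn yields `None`.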