---
license: apache-2.0
base_model: google/functiongemma-270m-it
tags:
- gemma
- function-calling
- tool-use
- gguf
- ollama
model-index:
- name: delia-functiongemma-270m-gguf
  results: []
---

# Delia FunctionGemma 270M - GGUF

This is the GGUF version of [delia-functiongemma-270m](https://huggingface.co/devopsforflops/delia-functiongemma-270m), fine-tuned for Delia MCP tool orchestration.

## Quick Start with Ollama

```bash
# Download the GGUF file
wget https://huggingface.co/devopsforflops/delia-functiongemma-270m-gguf/resolve/main/functiongemma-delia-f16.gguf

# Create Modelfile
cat > Modelfile << 'MODELFILE'
FROM ./functiongemma-delia-f16.gguf
TEMPLATE """{{ if .System }}<start_of_turn>developer
{{ .System }}<end_of_turn>
{{ end }}<start_of_turn>user
{{ .Prompt }}<end_of_turn>
<start_of_turn>model
"""
PARAMETER stop <end_of_turn>
PARAMETER stop <start_of_turn>
PARAMETER temperature 0.1
PARAMETER num_ctx 2048
MODELFILE

# Import to Ollama
ollama create functiongemma-delia -f Modelfile

# Test it
ollama run functiongemma-delia "Hello!"
```

## Model Details

| Property | Value |
|----------|-------|
| Base Model | google/functiongemma-270m-it |
| Architecture | Gemma3 |
| Parameters | 268M |
| Quantization | F16 (full precision) |
| File Size | ~518 MB |
| Context Length | 2048 tokens |

## Training

Fine-tuned using LoRA on Delia MCP tool-calling examples:

- LoRA rank: 16
- LoRA alpha: 64
- Epochs: 20
- Dataset: 27 training examples from the Delia test suite

## Use with Delia

Add to your Delia `settings.json`:

```json
{
  "model_dispatcher": {
    "name": "functiongemma-delia",
    "num_ctx": 2048
  }
}
```

**Important:** The model name must contain "functiongemma" for Delia to apply the correct prompt formatting.

## Related Models

- [delia-functiongemma-270m](https://huggingface.co/devopsforflops/delia-functiongemma-270m) - Full merged HuggingFace model
- [delia-functiongemma-270m-lora](https://huggingface.co/devopsforflops/delia-functiongemma-270m-lora) - LoRA adapter only

## License

Apache 2.0
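
## Appendix: Sketch of the Prompt Format

The Modelfile template above produces the Gemma-family turn format with `developer`, `user`, and `model` roles. A minimal Python sketch of that formatting can be useful for debugging prompts outside Ollama; the helper name is hypothetical, and the `<start_of_turn>`/`<end_of_turn>` tokens are the standard Gemma chat delimiters assumed by the template:

```python
from typing import Optional


def format_prompt(user_message: str, system_message: Optional[str] = None) -> str:
    """Hypothetical helper mirroring the Modelfile TEMPLATE:
    an optional developer turn, then the user turn, then an
    open model turn where the model writes its reply."""
    parts = []
    if system_message:
        parts.append(f"<start_of_turn>developer\n{system_message}<end_of_turn>\n")
    parts.append(f"<start_of_turn>user\n{user_message}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")
    return "".join(parts)


if __name__ == "__main__":
    print(format_prompt("Hello!", "You are a Delia tool dispatcher."))
```

The trailing open `<start_of_turn>model` turn is why `<end_of_turn>` is set as a stop sequence: generation halts when the model closes its own turn.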