---
language:
- en
pipeline_tag: text-generation
tags:
- gguf
- ollama
- tool-calling
- mcp
- tealkit
- qlora
base_model:
- google/gemma-4-E2B-it
license: mit
---
> **⚠️ This model is purpose-built for the [TealKit](https://lschaffer.github.io/tealkit) agentic AI app.**
> It is optimised for MCP tool-call generation inside TealKit's server mode.
## Model Details
| | |
|---|---|
| Base model | [google/gemma-4-E2B-it](https://huggingface.co/google/gemma-4-E2B-it) |
| Fine-tune method | QLoRA (4-bit base, 16-bit adapters, Unsloth) |
| Quantization | Q4_K_M |
| GGUF file | `model-q4_k_m.gguf` |
| Training date | 2026-05-15 |
## Quick Start (Ollama)
```bash
ollama create gemma4-tealkit -f Modelfile
ollama run gemma4-tealkit
```
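The `ollama create` command above expects a `Modelfile` next to the downloaded GGUF. A minimal sketch is shown below; the path assumes `model-q4_k_m.gguf` sits in the current directory, and the parameter value is illustrative, not the configuration shipped with TealKit:

```
# Hypothetical Modelfile sketch — adjust the path and parameters to your setup.
FROM ./model-q4_k_m.gguf

# Low temperature tends to suit deterministic tool-call generation (illustrative value).
PARAMETER temperature 0.1
```

`FROM` and `PARAMETER` are standard Ollama Modelfile directives; a `TEMPLATE` or `SYSTEM` directive can be added if your prompt format differs from the base model's default.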
## Training Pipeline
The model was produced in three steps:

1. QLoRA fine-tuning in Google Colab (Unsloth + TRL)
2. Fusion of the PEFT adapters into the base model
3. GGUF export and quantization with llama.cpp
See the [TealKit training guide](https://github.com/lschaffer/mobile_ai_agent/blob/master/scripts_training/README.md).