Use with the llama-cpp-python library
```python
# !pip install llama-cpp-python

from llama_cpp import Llama

# Downloads the GGUF from the Hub (cached locally) and loads it
llm = Llama.from_pretrained(
    repo_id="lschaffer/gemma4-tealkit",
    filename="model-q4_k_m.gguf",
)

# Simple smoke test
llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ]
)
```

⚠️ This model is purpose-built for the TealKit agentic AI app. It is optimised for MCP tool-call generation inside TealKit's server mode.
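Since the model targets tool-call generation, here is a hedged sketch of requesting a tool call through llama-cpp-python's OpenAI-compatible `tools` parameter. The `get_weather` schema is a hypothetical placeholder, not one of TealKit's actual MCP tools, and whether the reply comes back as structured `tool_calls` depends on the chat handler llama-cpp-python selects for the model.

```python
# Hypothetical tool schema for illustration only (not a TealKit MCP tool)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
print(response["choices"][0]["message"])  # expect a tool_calls entry here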

Model Details

| Detail | Value |
|---|---|
| Base model | google/gemma-4-E2B-it |
| Fine-tune method | QLoRA (4-bit base, 16-bit adapters, Unsloth) |
| Quantization | Q4_K_M |
| GGUF file | model-q4_k_m.gguf |
| Architecture | gemma4 |
| Parameters | 5B |
| Training date | 2026-05-15 |

Quick Start (Ollama)

```sh
ollama create gemma4-tealkit -f Modelfile
ollama run gemma4-tealkit
```
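The Modelfile referenced above is not included in this card. A minimal sketch, assuming the GGUF file sits next to the Modelfile; the parameter value is an illustrative assumption, not a tested setting:

```
# Minimal Modelfile sketch (hypothetical; adjust for your setup)
FROM ./model-q4_k_m.gguf
# A low temperature tends to suit deterministic tool-call output
PARAMETER temperature 0.2
```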

Training Pipeline

The model was fine-tuned with QLoRA in Google Colab (Unsloth + TRL), the PEFT adapters were merged back into the base weights, and the merged model was exported to GGUF via llama.cpp. See the TealKit training guide; a sketch of the pipeline follows below.
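A hedged sketch of that pipeline using Unsloth's API. The dataset file, LoRA rank, and training hyperparameters are illustrative assumptions, not the values used to train this model.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model (QLoRA) and attach 16-bit LoRA adapters
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-4-E2B-it",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # rank and alpha are assumptions
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical dataset of tool-call transcripts with a "text" column
dataset = load_dataset("json", data_files="tealkit_tool_calls.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=500,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Fuse adapters into 16-bit weights, then export a Q4_K_M GGUF via llama.cpp
model.save_pretrained_merged("merged", tokenizer, save_method="merged_16bit")
model.save_pretrained_gguf("gguf", tokenizer, quantization_method="q4_k_m")
```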
