# MARTHA-GEMMA-3rd-GEN-4B-OMNI

Gemma 3rd Gen | Built by Zero Point Intelligence Ltd, Dundee, Scotland. Published by Zero Point AI. Intelligence From The Void.

MARTHA is a 4B-parameter vision-language omni model. Helpful, accurate, direct. Nae shyte.
The personality is trained into the weights, fine-tuned on home-grown curated examples.
## Quick Start

### Ollama

```shell
ollama create martha-omni -f Modelfile
ollama run martha-omni
```
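The repository ships `MODELFILE_*` configs, one per quant. If you need to write your own, a minimal Modelfile might look like the sketch below — the filename is one of the quants listed further down, and the parameter values are illustrative, not the shipped defaults:

```
FROM ./MARTHA-GEMMA-3rd-GEN-4B-OMNI-Q4_K_M.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 8192
```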
### llama.cpp — text only

```shell
llama-server -m MARTHA-GEMMA-3rd-GEN-4B-OMNI-Q4_K_M.gguf -ngl 99
```

### llama.cpp — with vision

```shell
llama-server -m MARTHA-GEMMA-3rd-GEN-4B-OMNI-Q4_K_M.gguf --mmproj mmproj-f16.gguf -ngl 99
```
### Transformers

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Zero-Point-AI/MARTHA-GEMMA-3rd-GEN-4B-OMNI",
    dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("Zero-Point-AI/MARTHA-GEMMA-3rd-GEN-4B-OMNI")
```
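For text-only prompting, Gemma-family models use a start/end-of-turn chat format. A minimal sketch of that framing is below — the turn markers are the standard Gemma convention, which this fine-tune is assumed to inherit; in practice, prefer `tokenizer.apply_chat_template(...)` so the tokenizer's own template is authoritative:

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma-style turn markers,
    leaving the model turn open for generation."""
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("Who built you?")
print(prompt)
```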
## What You Get

| File | Description |
|---|---|
| `*.safetensors` | Full merged weights — trainable, deployable |
| `*-Q4_K_M.gguf` | Smallest quant — 8GB VRAM |
| `*-Q5_K_M.gguf` | Balanced — 10GB VRAM |
| `*-Q6_K.gguf` | High quality — 12GB VRAM |
| `*-Q8_0.gguf` | Near lossless — 16GB VRAM |
| `*-F16.gguf` | Full precision — 24GB+ VRAM |
| `mmproj-f16.gguf` | Vision projector — required for image input |
| `lora-adapter/` | Standalone LoRA — stackable, portable |
| `integrity_manifest.json` | SHA-256 hashes — verify every file |
| `MODELFILE_*` | Ollama configs — one per quant |
## Training
| Detail | Value |
|---|---|
| Base model | google/gemma-3-4b-it |
| Architecture | Gemma 3rd Generation |
| Type | Image-Text-to-Text (Omni) |
| Method | Ghost pass + LoRA fine-tune |
| Examples | 169,069 |
| Personality | Professional, clear, approachable |
| Framework | Unsloth / HuggingFace TRL + PEFT |
| Publisher | Zero Point Intelligence Ltd |
## Provenance

Derivative work. Full chain documented:
- google/gemma-3-4b-it — base weights (gemma)
- Ghost pass
- LoRA fine-tune — 169,069 examples, MARTHA personality
- Merge — LoRA absorbed into base weights
- Quantize — GGUF Q4/Q5/Q6/Q8/F16
## Integrity

Every distributed file is hashed in integrity_manifest.json. Verify:

```python
import hashlib, json

# Load the manifest and recompute each file's SHA-256.
with open("integrity_manifest.json") as f:
    manifest = json.load(f)

for fname, info in manifest["files"].items():
    with open(fname, "rb") as fh:
        actual = hashlib.sha256(fh.read()).hexdigest()
    match = "PASS" if actual == info["sha256"] else "FAIL"
    print(f"{match}: {fname}")
```
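The inverse direction — producing such a manifest — can be sketched the same way. The `files` / `sha256` layout matches the verification snippet above; anything beyond that (the helper names, the streaming chunk size) is an assumption. Hashing in chunks avoids loading multi-gigabyte weight files into RAM:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large weights never load whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(directory: str) -> dict:
    """Hash every regular file in directory into the manifest layout."""
    files = {
        p.name: {"sha256": sha256_of(p)}
        for p in sorted(Path(directory).iterdir())
        if p.is_file()
    }
    return {"files": files}

# Example: print(json.dumps(build_manifest("."), indent=2))
```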
## About
Zero Point Intelligence Ltd | Dundee, Scotland
zeropointai.uk | ZERO.POINT.INTELLIGENCE.LTD@zeropointai.uk | HuggingFace
No VC. No data centre. Just Dundee and determination.