
Zero Point Intelligence Ltd – Getting Started

New to local AI? Start here.

Running AI on your own computer means no subscriptions, no data leaving your machine, no censorship, no limits. Here's how to get started in 5 minutes.


Step 1: What You Need

  • A computer (Windows, Mac, or Linux – any modern one works)
  • Ollama – free software that runs AI models → Download here
  • A MARTHA model – pick one from our repos below

That's it. No Python. No coding. No cloud accounts.


Step 2: Install Ollama

Go to https://ollama.com and click Download. Install it like any normal app.


Step 3: Pick Your Model

| Your Hardware | Best Model | VRAM Needed | Download Size |
|---|---|---|---|
| Phone / Raspberry Pi | MARTHA-0.8B | 1 GB | ~500 MB |
| Laptop / Old PC | MARTHA-2B | 2 GB | ~1.5 GB |
| Gaming PC | MARTHA-4B | 4 GB | ~2.7 GB |
| Beefy GPU (RTX 3080+) | MARTHA-9B | 8 GB | ~5 GB |

Not sure about your VRAM? Start with MARTHA-4B. It runs on almost anything with a GPU.
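If you'd rather check your VRAM than guess, these commands report GPU memory (output varies by system; `nvidia-smi` ships with the NVIDIA drivers, so it is only present on NVIDIA machines):

```shell
# NVIDIA GPUs (Windows or Linux): lists each GPU and its total VRAM
nvidia-smi --query-gpu=name,memory.total --format=csv

# Apple Silicon Macs share system RAM with the GPU, so check total memory (bytes)
sysctl -n hw.memsize
```

On Windows without NVIDIA drivers, Task Manager's Performance tab also shows "Dedicated GPU memory".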


Step 4: Download and Run

Go to the model page and follow the Quick Start instructions. Each model has copy-paste commands for:

  • Windows (PowerShell)
  • Mac/Linux (Terminal)
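As a rough sketch of what those Quick Start commands look like (the exact model tag and Modelfile come from the model page; `martha-4b` here is a placeholder name, not necessarily the real tag):

```shell
# Register the downloaded GGUF with Ollama
# (the Modelfile from the model page points at the .gguf file)
ollama create martha-4b -f Modelfile

# Start chatting in the terminal
ollama run martha-4b
```

Type your message at the prompt; press Ctrl+D or type /bye to exit.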

Step 5: Want a Pretty Chat Interface?

Install Docker Desktop, then run:

docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

Open http://localhost:3000 in your browser. It gives you a web chat interface but runs entirely on your machine.
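If the page doesn't load, a quick check that the container is actually running (assuming Docker's CLI is on your PATH):

```shell
# The container should be listed with a status of "Up ..."
docker ps --filter name=open-webui
```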


What Makes MARTHA Different?

Every MARTHA model ships with two files:

  • The brain (.gguf) – handles text, code, reasoning
  • The eyes (mmproj-f16.gguf) – handles images, screenshots, photos

Most text-generation models on Hugging Face are text-only. MARTHA sees images too.
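Once both files are loaded, Ollama lets you include a local image path directly in the prompt for vision-capable models (the `martha-4b` tag and file path below are placeholders):

```shell
# The model reads the image at the given path along with your question
ollama run martha-4b "What is in this screenshot? ./screenshot.png"
```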


Jargon Buster

| You'll See This | It Means |
|---|---|
| GGUF | File format for AI models – like .mp4 for videos |
| Q4, Q5, Q8 | Compression levels – Q4 = small, Q8 = big but smarter |
| mmproj | The vision file – gives the model eyes |
| Ollama | Free app that runs AI locally |
| VRAM | GPU memory – more = bigger models |
| Safetensors | Raw model weights (for developers, not needed for basic use) |
| LoRA | A training technique – how we customise the model's personality |
| Ghost pass | Our method for creating unique model weights |

About Us

Zero Point Intelligence Ltd – Independent AI research lab, Dundee, Scotland 🏴󠁧󠁢󠁳󠁣󠁴󠁿

We build open-source AI models that run on your hardware. No cloud. No gatekeepers. No bullshit.

Intelligence From The Void
