# Article 07 — Experiment Notes
## Working Title
TBD (depends on results). Quantisation + vision on invoices.
## Premise
Article 06 tested five models reading text invoices. The best scored 83%. But text invoices are a gift — in reality, invoices are PDFs, scans, phone photos. Article 07 asks: what happens when the model has to *see* the invoice instead of *read* it? And what happens when you compress the model to fit on cheaper hardware?
## The Experiment
### Models — Google Gemma 4 family only
Three sizes, one vendor, one architecture lineage:
1. **Gemma 4 31B** (dense) — the largest. 31 billion parameters, all active on every token. This is the model that scored 83% on text invoices in article 06.
2. **Gemma 4 26B MoE** (Mixture of Experts) — 26 billion parameters total, but only ~4 billion active per token. 128 experts, 8 fire per token via learned routing. A shared expert (3x larger) handles general knowledge. In theory: 31B-class intelligence at 4B-class compute.
3. **Gemma 4 E4B** (dense, small) — 4.5 billion parameters. The edge model. Designed to run on phones and laptops.
All three are multimodal — they natively accept image input. Vision capabilities survive GGUF quantisation (requires mmproj projector file alongside the model).
**Important: Ollama only accepts PNG, JPG, and WebP as image input. Not PDF.** This is an Ollama limitation, not a model limitation — the Gemma 4 vision encoder works on pixel data, so any raster image format works, but Ollama's API doesn't handle PDF-to-image conversion. Invoices must be converted to PNG before being fed to the model. The PDF versions exist for distribution (HuggingFace dataset) and for other inference engines that might handle PDFs natively. For our Ollama-based benchmark, the pipeline is: Markdown → PDF (styled rendering) → PNG (for the model).
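For concreteness, a minimal sketch of the last two pipeline steps. It assumes poppler is installed along with the `pdf2image` and `ollama` Python packages; the prompt wording is illustrative, not the benchmark prompt.
```python
# Minimal sketch of the PDF -> PNG -> model steps of the pipeline.
# Assumes poppler plus the pdf2image and ollama packages; prompt wording is illustrative.
from pdf2image import convert_from_path
import ollama

pdf_path = "output/pdf/INV-2026-0001.pdf"
png_path = "output/png/INV-2026-0001.png"

# PDF -> PNG at 200 DPI (Ollama will not accept the PDF directly)
pages = convert_from_path(pdf_path, dpi=200)
pages[0].save(png_path, "PNG")

# PNG -> model: the image goes in the `images` field of the chat message
response = ollama.chat(
    model="gemma4:e4b",
    messages=[{
        "role": "user",
        "content": "Read this invoice and report the grand total.",
        "images": [png_path],
    }],
)
print(response["message"]["content"])
```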
### Quantisation levels
For each model, test at:
- **FP16** (full precision, 16-bit) — the reference. Every parameter stored as a 16-bit floating point number. Maximum quality, maximum size.
- **Q8** (8-bit) — roughly half the size of FP16. Usually considered "lossless" in practice.
- **Q4_K_M** (4-bit, K-quant medium) — roughly a quarter of the size. The practical minimum for most tasks. This is where quality-vs-size tradeoffs start to bite.
The E4B (smallest model) is small enough to run unquantised (FP16) on the MacBook, so there is no Q8 step for it. It's the "what if you don't compress at all but just use a smaller model?" baseline.
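The size claims above (and the "Est. memory" column in the run matrix below) are simple arithmetic: parameters times bits per weight. A rough sketch, using approximate effective bits for the GGUF formats (per-block scales add a little overhead) and ignoring KV cache, mmproj and runtime buffers:
```python
# Back-of-envelope weight memory: parameters x bits per weight / 8.
# Effective bits for GGUF quants are approximate (per-block scales add overhead);
# KV cache, mmproj and runtime buffers are not included.
def weight_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * bits_per_weight / 8

for name, params in [("Gemma 31B dense", 31), ("Gemma 26B MoE", 26), ("Gemma E4B", 4.5)]:
    for label, bits in [("FP16", 16), ("Q8_0", 8.5), ("Q4_K_M", 4.85)]:
        print(f"{name:16s} {label:7s} ~{weight_memory_gb(params, bits):.0f} GB")
```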
### Quantisation format
**GGUF** via llama.cpp / Ollama. Pre-quantised versions available from Unsloth on HuggingFace. No need to quantise ourselves.
This is **weight quantisation** — the model's learned parameters are stored at lower precision. This is NOT the same as:
- **KV cache quantisation** (TurboQuant, KIVI) — compresses the conversation memory at inference time, not the model itself
- **Activation quantisation** — compresses the intermediate computations during inference
Weight quantisation is permanent: you download a smaller file, it loads into less memory, and every inference uses the compressed weights. KV cache quantisation is dynamic: the model weights stay full precision, but the memory of the conversation is compressed on the fly. They solve different problems. Weight quantisation = "make the model smaller." KV cache quantisation = "make the conversation cheaper." They can be combined.
### Input format
The 200 existing synthetic invoices from article 06, rendered as styled PDFs and converted to PNG (200 DPI) for model input. Three formats exist in the dataset:
- **Markdown** (`output/invoices/`) — plain text, used in article 06
- **PDF** (`output/pdf/`) — styled single-page documents with tables, headers, formatted numbers
- **PNG** (`output/png/`) — 200 DPI raster images converted from PDF, actual input to Ollama
Preserved across formats:
- Table vs paragraph layout from the original corpus
- German/Swiss/English number formatting preserved visually
- All with the same cent-perfect ground truth from article 06
Future variation (not yet implemented):
- Multiple visual templates (3-5 styles)
- Different fonts
- Slight rotation (simulating a scan)
- Lower resolution variants (simulating a phone photo)
### Evaluation conditions
Same two conditions as article 06:
- **Autopilot:** model sees the image, reports the total
- **Hybrid:** model sees the image, extracts structured fields, Python recomputes the total
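What "Python recomputes the total" means in miniature. The field names and rates below are illustrative, not the harness's actual schema:
```python
# Hybrid condition, in miniature: the model only extracts fields,
# deterministic code does the arithmetic. Field names and rates are illustrative.
from decimal import Decimal

def recompute_total(fields: dict) -> Decimal:
    subtotal = sum(
        Decimal(str(item["quantity"])) * Decimal(str(item["unit_price"]))
        for item in fields["line_items"]
    )
    discount = subtotal * Decimal(str(fields.get("discount_percent", 0))) / 100
    taxable = subtotal - discount
    vat = taxable * Decimal(str(fields.get("vat_rate_percent", 0))) / 100
    return (taxable + vat).quantize(Decimal("0.01"))

example = {
    "line_items": [{"quantity": 3, "unit_price": "199.90"}],
    "discount_percent": 10,
    "vat_rate_percent": 8.1,
}
print(recompute_total(example))  # deterministic, cent-exact
```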
### What we measure
- Exact match accuracy (vs ground truth)
- Parse rate (did the model produce usable output?)
- Error detection rate (did it flag the 40 deliberately broken invoices?)
- Worst single error
- Time per invoice
- GPU memory usage
- The German comma: accuracy breakdown by number format (English vs Swiss vs German)
- The discount trap: accuracy on percentage discounts in hybrid mode
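A sketch of how the headline numbers fall out of the per-invoice predictions. Structure and names are illustrative, not lifted from `run_benchmark.py`:
```python
# Scoring sketch: parse rate, exact match, worst single error.
# Assumes one predicted total per invoice (None if the output was unparseable)
# and a ground-truth total per invoice; totals shown here are made up.
from decimal import Decimal

def score(predictions: dict, truth: dict) -> dict:
    parsed = {inv: total for inv, total in predictions.items() if total is not None}
    exact = sum(1 for inv, total in parsed.items() if total == truth[inv])
    worst = max((abs(total - truth[inv]) for inv, total in parsed.items()),
                default=Decimal("0"))
    return {
        "parse_rate": len(parsed) / len(predictions),
        "exact_match": exact / len(predictions),
        "worst_single_error": worst,
    }

print(score(
    {"INV-2026-0001": Decimal("583.45"), "INV-2026-0002": None},
    {"INV-2026-0001": Decimal("583.45"), "INV-2026-0002": Decimal("101.00")},
))
```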
### Hardware
- **FP16 runs:** Scaleway H100 at €2.73/hr (same as article 06)
- **Q8/Q4 runs:** MacBook Air M3, 24GB RAM (same laptop as article 06)
- **E4B (all runs):** MacBook
### Run matrix
| # | Model | Precision | Hardware | Est. memory |
|---|-------|-----------|----------|-------------|
| 1 | Gemma 31B dense | FP16 | H100 | ~62GB |
| 2 | Gemma 31B dense | Q8 | H100 | ~32GB |
| 3 | Gemma 31B dense | Q4_K_M | MacBook | ~18GB |
| 4 | Gemma 26B MoE | FP16 | H100 | ~52GB |
| 5 | Gemma 26B MoE | Q8 | H100 or MacBook | ~28GB |
| 6 | Gemma 26B MoE | Q4_K_M | MacBook | ~15GB |
| 7 | Gemma E4B | FP16 | MacBook | ~9GB |
| 8 | Gemma E4B | Q4_K_M | MacBook | ~5GB |
8 runs × 200 invoices × 2 conditions = 3,200 inference calls.
MacBook runs estimated at 30-60s per image. Weekend project.
---
## Article Content Requirements
### Must explain (for non-ML readers):
**How does a language model "see" an image? (LEAD WITH THIS)**
This is the central question the article must answer up front — before MoE, before quantisation, before results. The reader knows from article 06 that language models process text. Now we're feeding them pictures. The obvious question: how?
The answer, plainly:
1. **A language model cannot see.** It processes tokens — chunks of text. It has no eyes, no pixel processing, no concept of "looking at" anything. Left to its own devices, a language model receiving an image receives nothing.
2. **So you bolt on eyes.** A multimodal model like Gemma 4 has two systems bolted together:
- A **vision encoder** (typically a Vision Transformer, or ViT) — a separate neural network trained on images. This network takes a picture, chops it into a grid of patches (like cutting a photo into small squares), and converts each patch into a vector — a list of numbers that encodes "what's in this patch."
- A **projection layer** (the "connector" or "adapter") — a small neural network that translates the vision encoder's vectors into the same format as text tokens. It's a Rosetta Stone between the visual world and the language world.
3. **What the language model actually receives:** Not pixels. Not an image. A sequence of fake "tokens" that were generated by the vision encoder and translated by the projector. To the language model, an invoice image looks like a long sequence of tokens — just like a text prompt. It doesn't know they came from a picture. It just processes tokens the way it always does.
Analogy: imagine you're a book editor who only reads English. Someone hands you a Japanese document. You can't read it. But your colleague translates it into English and hands you the translation. You read the English version. You never learned Japanese — you're just trusting the translation. That's what a multimodal model does. The language model is the English-only editor. The vision encoder + projector is the translator.
4. **Why this matters for the experiment:**
- The vision encoder has its own quality ceiling. If it misreads a digit, the language model has no way to know — it received a confident-looking token, not a blurry pixel.
- Quantisation compresses the language model's weights. But the vision encoder (stored separately as the **mmproj** file in GGUF/Ollama) may or may not be compressed. If the projector is quantised, the "translation" degrades — the language model receives worse tokens.
- A text invoice is direct input — the model reads the actual characters. An image invoice is indirect — the model reads a *description* of the characters as interpreted by a separate neural network. Every error in article 06 still applies, plus a new category: translation errors from the vision pipeline.
5. **The punchline for the article:** In article 06, the model read the invoice. In article 07, the model reads a translation of a photograph of the invoice. We added two layers of indirection and asked whether the answer stays the same.
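The patching step in point 2, as arithmetic. The image size and patch size below are hypothetical numbers for illustration, not the actual Gemma 4 encoder geometry:
```python
# Toy patching arithmetic: how an image turns into a fixed-length token sequence.
# The 896x896 input and 14-pixel patches are hypothetical illustration numbers.
image_size = 896          # pixels per side after resizing
patch_size = 14           # pixels per patch side
patches_per_side = image_size // patch_size
num_vision_tokens = patches_per_side ** 2
print(num_vision_tokens)  # 4096 "tokens" the language model receives instead of pixels
```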
Style note: This should open the article, right after the premise. Before the reader sees any results, they need to understand that "multimodal" doesn't mean the model grew eyes. It means someone bolted a camera onto a typewriter and wrote a translator in between. The FT Alphaville voice should have fun with this — "the model doesn't see your invoice; it reads a hallucinated transcription of your invoice generated by a separate neural network that was never told what an invoice is."
**What is Mixture of Experts?**
- The concept: instead of one big brain, 128 small specialists. A router looks at each token and picks 8 experts to handle it.
- Why it matters: you get 26B worth of knowledge but only pay for 4B of compute per token.
- The risk: quantisation can corrupt the router. If the router sends tokens to the wrong experts, the model doesn't just get a bit worse — it gets unpredictably wrong.
- Needs a simple visual analogy. Hospital with 128 specialists vs one GP? Law firm with 128 specialists vs one generalist?
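A toy version of the routing described above: 128 experts, 8 fire per token. Simplified to the point of caricature, with random weights instead of learned ones, no shared expert, and no load balancing:
```python
# Toy top-k routing: a router scores 128 experts per token, only the 8 best run.
# Random weights stand in for learned ones; no shared expert, no load balancing.
import numpy as np

rng = np.random.default_rng(0)
num_experts, top_k, hidden = 128, 8, 64

token = rng.standard_normal(hidden)                   # one token's hidden state
router = rng.standard_normal((hidden, num_experts))   # routing matrix (learned, in reality)

logits = token @ router                               # one score per expert
chosen = np.argsort(logits)[-top_k:]                  # the 8 experts that fire
weights = np.exp(logits[chosen]) / np.exp(logits[chosen]).sum()  # softmax over the chosen

# Only the chosen experts run; their outputs are mixed by the routing weights.
expert_outputs = rng.standard_normal((top_k, hidden))  # stand-ins for expert FFN outputs
output = (weights[:, None] * expert_outputs).sum(axis=0)
print(chosen)  # quantise the router badly and this set changes
```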
**What is quantisation?**
- The concept: storing numbers with less precision. A 16-bit number can represent 65,536 distinct values. A 4-bit number can represent 16. You're rounding every parameter in the model to the nearest value on a coarser grid.
- Simple example: imagine you have a ruler marked in millimetres (FP16). Now replace it with a ruler marked in centimetres (Q4). You can still measure things, but you lose the fine detail. A 7.3mm measurement becomes "1cm." Most of the time this doesn't matter. Sometimes it does — and you can't predict when.
- Why it works: most parameters in a neural network are close to zero. The information loss from rounding is small in aggregate. Until it isn't.
- What it is NOT: a zip file you unpack before use. There is no decompression step; the resolution of every number in the model is reduced permanently, and the loss is baked in.
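The ruler analogy in code: naive absmax rounding of a few weights to a 4-bit grid. Real schemes (K-quants, GPTQ, AWQ) choose the grid more cleverly; this is the blunt version:
```python
# Naive absmax quantisation of a weight vector to 4 bits, then back.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(8).astype(np.float32) * 0.1   # most weights sit near zero

levels = 2 ** 4                                     # 16 representable values at 4 bits
scale = np.abs(weights).max() / (levels / 2 - 1)    # map the range onto the coarse grid
quantised = np.clip(np.round(weights / scale), -levels / 2, levels / 2 - 1)
restored = quantised * scale

print(np.round(weights, 4))
print(np.round(restored, 4))
print("max rounding error:", np.abs(weights - restored).max())
```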
**Quantisation is not quantisation — the zoo of methods**
The article must make clear: "quantisation" is not one thing. It's a family of techniques that compress different parts of the system in different ways. The reader who googles "quantisation" will find a dozen acronyms. We need to map the territory before we pick our spot on it.
**A. What you're compressing (the target):**
| Target | What it does | When it happens | Analogy |
|--------|-------------|-----------------|---------|
| **Weight quantisation** | Compresses the model's learned parameters (the "brain") | Once, before inference. You download a smaller file. | Shrinking the textbook permanently — fewer pages, coarser print |
| **KV cache quantisation** | Compresses the conversation memory (what the model remembers about this chat) | Dynamically, during inference. Model weights stay full size. | Shorthand notes instead of full transcripts — written on the fly, discarded after |
| **Activation quantisation** | Compresses the intermediate calculations during a forward pass | Dynamically, during inference | Doing mental arithmetic in round numbers instead of exact decimals |
These solve **completely different problems:**
- Weight quantisation → "Can I fit this model on my laptop?" (article 07 — this experiment)
- KV cache quantisation → "Can I serve 10x more users on the same GPU?" (article 08 — Chipspotting / TurboQuant)
- Activation quantisation → "Can I make each inference faster on specialised hardware?" (not our problem)
They can be stacked. A Q4 weight-quantised model with TurboQuant KV cache compression and INT8 activations. Three layers of compression hitting three different targets. But for this experiment we're only touching the first one.
**B. How you compress the weights (the method):**
Even within weight quantisation, there are fundamentally different approaches:
| Method | Full name | How it works | Needs training data? | Quality |
|--------|-----------|-------------|---------------------|---------|
| **PTQ** | Post-Training Quantisation | Take a finished model, round the weights. No retraining. | No (or a small calibration set) | Good enough at 8-bit, degrades at 4-bit |
| **QAT** | Quantisation-Aware Training | Train the model knowing it will be quantised — the model learns to be robust to rounding | Yes (full training run) | Better at low bit-widths, but expensive |
| **GPTQ** | GPT-Quantisation | PTQ but smarter — uses a calibration dataset to find the rounding that minimises output error, layer by layer | Small calibration set (~128 examples) | Very good at 4-bit, the standard for GPU inference |
| **AWQ** | Activation-Aware Weight Quantisation | Like GPTQ but protects "important" weights — finds which weights matter most (based on activation patterns) and keeps those at higher precision | Small calibration set | Often better than GPTQ at 4-bit, especially for smaller models |
| **GGUF / llama.cpp quants** | K-quants (Q4_K_M, Q5_K_S, etc.) | PTQ with per-block scaling factors and mixed precision — different layers get different bit-widths based on importance | No | The CPU/Mac standard. What Ollama uses. What we're using. |
| **BnB** | BitsAndBytes | Dynamic quantisation at load time — quantise on the fly when moving from disk to GPU | No | Easy to use, slightly worse than GPTQ/AWQ |
**C. Why we're using GGUF K-quants (and why it probably doesn't matter much):**
Our choice: **GGUF format, K-quant method (Q8_0, Q4_K_M), via Ollama/llama.cpp.**
Why:
1. **Ollama uses GGUF.** We're running on a MacBook. Ollama is the inference engine. It only speaks GGUF. Decision made.
2. **Pre-quantised models exist.** Unsloth publishes GGUF quants for Gemma 4 on HuggingFace. No work required on our end.
3. **K-quants are state of the art for CPU inference.** They use mixed precision per block — "important" layers keep more bits, less important layers get fewer. It's not as sophisticated as AWQ/GPTQ (which use calibration data), but for the GGUF/CPU ecosystem, K-quants are the standard.
Does the method matter? **At Q8: almost certainly not.** All methods converge to near-lossless at 8-bit. The differences emerge at Q4 and below, where smarter methods (AWQ, GPTQ) can sometimes preserve quality that naive rounding destroys. But:
- We're comparing against an FP16 baseline on the same task, so any quality loss is visible regardless of method.
- We're using what a real user would use (Ollama + pre-built GGUF). The experiment tests the practical question ("if I download the Q4 model from HuggingFace and run it, do my invoices still work?"), not the theoretical question ("what's the optimal quantisation algorithm for invoice processing?").
- If Q4_K_M fails catastrophically on invoices, the article can note: "a fancier method (AWQ, GPTQ) might claw back some accuracy — but you'd need a GPU and a calibration dataset. On a MacBook, this is what you get."
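The "per-block scaling factors" idea from the K-quant row, in miniature. Real Q4_K_M also mixes bit-widths across layers and quantises the scales themselves; this sketch only shows why per-block scales help when one weight is an outlier:
```python
# Per-block scaling in miniature: each block of 32 weights gets its own 4-bit grid,
# so a single outlier only coarsens its own block rather than the whole tensor.
import numpy as np

def quantise_blockwise(w: np.ndarray, bits: int = 4, block: int = 32) -> np.ndarray:
    half = 2 ** (bits - 1)
    out = np.empty_like(w)
    for start in range(0, len(w), block):
        chunk = w[start:start + block]
        scale = max(np.abs(chunk).max() / (half - 1), 1e-12)
        out[start:start + block] = np.clip(np.round(chunk / scale), -half, half - 1) * scale
    return out

rng = np.random.default_rng(0)
weights = rng.standard_normal(4096).astype(np.float32) * 0.02
weights[100] = 3.0  # one outlier
per_block = np.abs(weights - quantise_blockwise(weights)).mean()
one_scale = np.abs(weights - quantise_blockwise(weights, block=len(weights))).mean()
print(f"mean error, per-block scales: {per_block:.4f} vs one global scale: {one_scale:.4f}")
```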
**D. The TurboQuant distinction (what this is NOT):**
TurboQuant (Google, ICLR 2026) is **KV cache quantisation**, not weight quantisation. It compresses the conversation memory, not the model itself:
- The model weights stay at full precision (FP16 or BF16)
- The KV cache (the stored key/value pairs from previous tokens — essentially the model's short-term memory of the current conversation) gets compressed from 16-bit to 3-4 bits per element
- This means you can have longer conversations or serve more users on the same GPU
- The model itself doesn't get smaller — you can't run it on a laptop because of TurboQuant
To be precise about what KV cache IS: every time a transformer processes a token, it produces a "key" vector and a "value" vector for that token at every layer. These get stored so the model doesn't have to recompute them. For a 128K-token conversation, that's millions of vectors sitting in GPU memory. TurboQuant compresses those stored vectors. It's the difference between "make the model's brain smaller" (weight quantisation) and "make the model's notepad smaller" (KV cache quantisation).
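To put numbers on "millions of vectors": KV cache size is tokens × layers × KV heads × head dimension × 2 (keys and values) × bytes per element. The dimensions below are hypothetical, chosen only to make the scale visible:
```python
# KV cache arithmetic: what KV-cache quantisation shrinks.
# Layer/head dimensions below are hypothetical, not any particular model's.
def kv_cache_gb(tokens, layers=48, kv_heads=8, head_dim=128, bytes_per_elem=2.0):
    # 2x for keys and values, stored at every layer for every token
    return tokens * layers * kv_heads * head_dim * 2 * bytes_per_elem / 1e9

print(f"128K tokens at FP16:    {kv_cache_gb(128_000):.1f} GB")
print(f"128K tokens at ~4 bits: {kv_cache_gb(128_000, bytes_per_elem=0.5):.1f} GB")
```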
Why this matters for the article: the reader who reads "quantisation makes AI cheaper" needs to know there are at least two completely different things being compressed. Samsung's stock didn't fall because of Q4_K_M. It fell because of TurboQuant. Different target, different implication, different article (→ Chipspotting, article 08).
**E. Summary for the article — the one-paragraph version:**
"When people say 'quantised model,' they usually mean weight quantisation — the brain got smaller. When Google published TurboQuant and Samsung's stock fell, that was KV cache quantisation — the notepad got smaller. When NVIDIA talks about INT8 inference, that's activation quantisation — the mental arithmetic got faster. This article tests weight quantisation only: we took the same model, crushed its brain from 16-bit to 4-bit, and asked whether it can still read an invoice. The other kinds of compression are someone else's article."
### Voice and style
- FT Alphaville dry humour throughout
- Self-Delusion Cycle restarts at Phase I ("it has eyes now")
- Direct callbacks to article 06 results (83% text baseline)
- Technical but not academic — explain everything, assume intelligence but not expertise
- Tables for results, not charts (matches previous articles)
- Bold takeaways in "What This Suggests"
- The narrative arc depends on results — framing TBD after experiments
---
## Open Questions
1. ~~**Rendering script:** How to convert Markdown invoices to styled PDFs/PNGs?~~ **DONE.** ReportLab for MD→PDF, pdf2image/poppler for PDF→PNG. 200 PDFs + 200 PNGs generated. Single visual template for now — multiple templates are a future extension.
2. ~~**Ollama vision support:**~~ **RESOLVED.** Ollama supports Gemma 4 multimodal. Syntax: pass PNG/JPG file paths in the `images` field of the API call or drag files in interactive mode. **PDFs are NOT accepted** — must convert to PNG first. The Python API: `ollama.chat(model='gemma4:e4b', messages=[{'role': 'user', 'content': '...', 'images': ['path.png']}])`.
3. **MoE quantisation quality:** Research says "expert-shift" can corrupt routing. Is this observable on invoices? The German comma disaster from article 06 could get worse if the routing sends German-formatted numbers to the wrong expert.
4. **Benchmark harness:** Can we adapt `run_benchmark.py` from article 06? Need to add image input path, keep everything else (scoring, CSV output, conditions B and C).
5. **Timeline:** Rendering script → test runs on 10 invoices → full runs → analysis → article. Two weekends?
---
## Connections to Previous Articles
- **Article 06:** Same invoices, same ground truth, same evaluation. Direct accuracy comparison: text vs vision, full precision vs quantised.
- **Article 05:** Same Scaleway H100 setup. "Our old friend the rented GPU."
- **Article 03:** Reasoning models underperformed. MoE is a different kind of "smart architecture" — does it also disappoint?
- **Article 04:** The TF-IDF word counter beat 70B LLMs. Will a 4-bit model on a laptop beat a full-precision model on an H100?
---
## HuggingFace Artifacts
### 1. Image dataset: `jngb-labs/InvoiceBenchmark-Vision`
The primary artifact. The 200 text invoices from article 06, rendered as styled images (PNG) with visual variation. Same ground truth JSONs, paired with images instead of Markdown.
Structure:
```
InvoiceBenchmark-Vision/
├── images/ # 200 rendered invoice PNGs
│ ├── INV-2026-0001.png
│ ├── INV-2026-0002.png
│ └── ...
├── ground_truth/ # Same JSONs from text dataset (symlinked or copied)
│ ├── INV-2026-0001.json
│ └── ...
├── manifest.csv # invoice_id, image_path, ground_truth_path, template, font, rotation, resolution
├── rendering_metadata.json # Which template, font, rotation, DPI was used per invoice
├── render_invoices.py # The rendering script (reproducible)
└── README.md # Dataset card
```
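A sketch of how a consumer would walk this layout, assuming the manifest paths are relative to the dataset root and the ground-truth JSON exposes a total field (field name illustrative):
```python
# Pair each rendered invoice image with its ground truth via manifest.csv.
# Relative paths and the "total" field name are assumptions about the final layout.
import json
import pandas as pd

root = "InvoiceBenchmark-Vision"
manifest = pd.read_csv(f"{root}/manifest.csv")

for row in manifest.itertuples():
    with open(f"{root}/{row.ground_truth_path}") as f:
        truth = json.load(f)
    # row.image_path is what goes into the model's `images` field
    print(row.invoice_id, row.image_path, truth.get("total"))
```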
Dataset card should:
- Link back to text version (`jngb-labs/InvoiceBenchmark`)
- Explain that this is the same corpus, same ground truth, different modality
- Include baseline vision results from this experiment
- Tag: `image-to-text`, `document-understanding`, `ocr-benchmark`, `invoice-processing`
Why this matters: there is no existing controlled invoice vision benchmark on HuggingFace with cent-perfect ground truth and five controlled dimensions. OCR benchmarks exist, but they test character recognition — this tests *comprehension*. The model must read the number AND understand the invoice logic (VAT, discounts, number formats).
### 2. HuggingFace Space (after results)
Interactive leaderboard. Visitors pick model + quantisation level, see accuracy, worst error, example failures. Could include:
- Results table with filters (model, quant level, number format, layout)
- "Gallery of failures" — side-by-side: invoice image, model output, ground truth
- Comparison toggle: text results (article 06) vs vision results (article 07)
Tech: Gradio or static HTML. Free hosting on HF Spaces.
Build AFTER the experiment — needs results data.
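A minimal sketch of what the Gradio version could look like, assuming a flat results CSV with hypothetical column names:
```python
# Minimal leaderboard sketch for the Space. Column names in results.csv are hypothetical.
import gradio as gr
import pandas as pd

results = pd.read_csv("results.csv")  # e.g. model, quant, condition, accuracy, worst_error

def filter_results(quant_level):
    if quant_level == "all":
        return results
    return results[results["quant"] == quant_level]

with gr.Blocks() as demo:
    quant = gr.Dropdown(["all", "FP16", "Q8", "Q4_K_M"], value="all", label="Quantisation")
    table = gr.Dataframe(value=results)
    quant.change(filter_results, inputs=quant, outputs=table)

demo.launch()
```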
### 3. Benchmark results on model cards (optional)
HuggingFace lets you attach evaluation results to model pages. Could add InvoiceBenchmark scores to the Gemma 4 model cards so they show up when people browse models. Lower priority but good visibility.
---
## Chipspotting (Article 08?)
The TurboQuant / KV cache / Samsung / Jevons paradox article is a separate piece. Different experiment, different framing, different thesis. Keep it for later — it doesn't connect to the invoice series arc. See `/chipspotting-outline.md` for the full plan.