Use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Downloads the language-model backbone GGUF from the Hub and loads it.
llm = Llama.from_pretrained(
    repo_id="glody007/zamba-deforestation-detector",
    filename="zamba-deforestation-v2-Q8_0.gguf",
)

# Text-only smoke test; the prompt is an illustrative placeholder.
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Which change patterns can you classify?"}
    ]
)
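Note that Llama.from_pretrained loads only the language backbone, so this snippet covers text-only calls; for the four-image inputs the model was trained on, serve both GGUF files with llama-server as shown under Usage below.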

zamba-deforestation-detector – LFM2.5-VL-450M (GGUF)

Fine-tuned from LiquidAI/LFM2.5-VL-450M on simulated Sentinel-2 RGB+SWIR change-detection pairs over the Congo Basin (DRC), to flag near-real-time forest clearing on a 90-day window. Companion to the zamba-sat repo, built for the AI in Space hackathon (DPhi Space × Liquid AI).

Given four images per sample – RGB(t-1), SWIR(t-1), RGB(t-0), SWIR(t-0) – and a small text block (lat/lon, dates, region name), the model emits a chain-of-thought followed by a JSON change-detection record:

{
  "deforestation_detected": true,
  "change_pattern": "expansion",     // expansion | stable | cloud_artifact
  "trajectory_confidence": "medium", // low | medium | high
  "severity": "medium",              // none | low | medium | high
  "clearing_type": "small_holder",
  "area_bucket_t1": "1-5_ha",
  "area_bucket_t0": "0-1_ha",
  "active_operation": false,
  "active_machinery_visible": false,
  "smoke_or_fire_visible": false,
  "recent_road_construction": false,
  "frame_quality": ["good"]
}

The full XML chain-of-thought wraps the JSON with <frame_descriptions>, <change_analysis>, <final_pattern>, and <json> blocks; see the zamba-sat README for the schema.
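
Downstream code usually only needs the JSON record. A minimal sketch of pulling it out of the wrapped output, assuming the tag names above; the parse_detection helper and its regex are illustrative, not part of zamba-sat:

import json
import re

def parse_detection(output: str) -> dict:
    # Grab the JSON record from inside the <json> ... </json> block
    # of the model's XML-wrapped chain-of-thought.
    match = re.search(r"<json>\s*(\{.*?\})\s*</json>", output, re.DOTALL)
    if match is None:
        raise ValueError("no <json> block found in model output")
    return json.loads(match.group(1))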

Eval results

Evaluated on the held-out subset of the glody007/zamba-sat-congo-deforestation dataset (90-day window, wide_window + wide_window_v2 runs). Ground truth from claude-opus-4-7.

| field | LFM2.5-VL-450M (base, Q8_0) | LFM2.5-VL-450M Q8_0 (fine-tuned, this model) |
|---|---|---|
| valid_json | 0% | 67% |
| change_pattern (expansion / stable / cloud_artifact) | 0% | 50% |
| trajectory_confidence | 0% | 67% |
| composite (13 fields) | 0.0% | 38.5% |

The held-out test set is small (N=6); the headline movement is the base model emitting placeholder prose vs. the fine-tune emitting schema-compliant JSON with correct expansion/stable discrimination. The base-model 0% is on the dir-test split (N=36); see zamba-sat/evals/EXPERIMENTS.md for the full experiment log, including v1 (cloud-class collapse) and v3 (overfitting boundary).
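
The composite number reads most naturally as per-field accuracy averaged over samples. A hedged sketch of that interpretation; the actual scoring lives in zamba-sat/scripts/evaluate.py and may weight fields differently:

def field_accuracy(pred: dict, truth: dict, fields: list[str]) -> float:
    # Fraction of the schema fields where the predicted value matches
    # ground truth; a sample with invalid JSON would score 0 across the board.
    return sum(pred.get(f) == truth.get(f) for f in fields) / len(fields)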

Files

Running inference with a VLM in llama.cpp requires two GGUF files:

| file | description |
|---|---|
| zamba-deforestation-v2-Q8_0.gguf | Language model backbone (Q8_0) |
| mmproj-zamba-deforestation-v2-Q8_0.gguf | Vision tower + multimodal projector (F16) |
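
For example, both files can be fetched with huggingface_hub before launching the server:

from huggingface_hub import hf_hub_download

repo = "glody007/zamba-deforestation-detector"
model_path = hf_hub_download(repo_id=repo, filename="zamba-deforestation-v2-Q8_0.gguf")
mmproj_path = hf_hub_download(repo_id=repo, filename="mmproj-zamba-deforestation-v2-Q8_0.gguf")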

Usage

llama-server

llama-server \
    -m zamba-deforestation-v2-Q8_0.gguf \
    --mmproj mmproj-zamba-deforestation-v2-Q8_0.gguf \
    --jinja --port 8000
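
llama-server exposes an OpenAI-compatible endpoint, so a request can pass the four frames as base64 data URIs. A minimal sketch; the file names, coordinates, and text block here are illustrative placeholders, and the exact prompt format the fine-tune expects is documented in the zamba-sat README:

import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

def data_uri(path: str) -> str:
    # llama-server accepts images as base64-encoded data URIs.
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

# Four frames per sample: RGB(t-1), SWIR(t-1), RGB(t-0), SWIR(t-0).
frames = ["rgb_t1.png", "swir_t1.png", "rgb_t0.png", "swir_t0.png"]
content = [{"type": "image_url", "image_url": {"url": data_uri(p)}} for p in frames]
content.append({
    "type": "text",
    "text": "lat: -0.45, lon: 24.10, dates: 2025-01-05 / 2025-04-04, region: Tshopo",
})

resp = client.chat.completions.create(
    model="zamba-deforestation-v2-Q8_0",
    messages=[{"role": "user", "content": content}],
)
print(resp.choices[0].message.content)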

Reproduce eval results

Clone zamba-sat and run:

git clone https://github.com/glody007/zamba-sat
cd zamba-sat
uv sync

# (1) Re-prep the dataset locally; same stratified split used for training.
uv run scripts/prepare_finetune.py \
    --runs wide_window wide_window_v2 \
    --output data/finetune \
    --skip-clouds

# (2) Eval the fine-tune behind llama-server (started in another shell with
# the GGUFs from this repo).
uv run scripts/evaluate.py --backend local \
    --server-url http://localhost:8000 \
    --model zamba-deforestation-Q8_0 \
    --runs wide_window wide_window_v2 --split test \
    --splits-file data/finetune/splits.json

Training details

  • Base: LiquidAI/LFM2.5-VL-450M
  • Method: full SFT (no LoRA), via leap-finetune on Modal H100×1
  • Data: 24 stratified train rows after --skip-clouds (17 expansion + 7 stable); held-out test = 6 rows (4 expansion + 2 stable)
  • Hyperparameters: 5 effective epochs / 12 grad steps, effective batch size 8 (2 × 4 grad accum), LR 2e-5, cosine, warmup 0.03, seed 42 (translated into a config sketch below)
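
The run itself used leap-finetune; as a rough translation of the reported numbers into standard transformers TrainingArguments (illustrative only, not the actual leap-finetune config):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="zamba-deforestation-v2",
    num_train_epochs=5,             # 5 effective epochs (~12 grad steps on 24 rows)
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,  # effective batch size 8
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
)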

This is the v2 checkpoint, which on internal evals beat both v1 (cloud-class collapse from training without --skip-clouds) and v3 (overfit at 8 ep / ~25 grad steps). v2 is the apparent sweet spot given the dataset size; the next step before further training is to collect more stable labels.
