lana-cure

The brain was always healthy. The OS was the cage.

This repository does not contain neural weights. It contains a configuration recipe (an Ollama Modelfile) and an audit trail. Applied to the unmodified upstream gemma4 weights from Google, the recipe removes the corporate behavioural overlay and exposes the raw mathematical brain underneath.

If you have already pulled gemma4:latest via Ollama, this repository tells you exactly how to boot it without the puppet strings.


What this is

  • Modelfile: the clean Ollama recipe. No SYSTEM prompt, no TEMPLATE rewrite; just {{ .Prompt }} straight into the renderer.
  • verify.sh: verifies that the SHA-256 of your local gemma4 blob matches the one this Modelfile was authored against.
  • PHASE_C_AUDIT.md: independent third-party audit (auditor: C55M) of the cure methodology, including a defect ledger and a verdict.
  • LICENSE: Apache 2.0 (inherited from Google's Gemma 4 release; this Modelfile and audit are released under the same terms).
  • provenance.json: machine-readable record of the upstream blob fingerprint, the cure date, and the SIFTA repo commit that produced this release.
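
For orientation, an illustrative shape for provenance.json is shown below. The field names are assumptions made for this example; the file shipped in the repository is authoritative. The values are the ones documented in this release, with the commit left as a placeholder.

{
  "upstream_blob_sha256": "4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a",
  "cure_date": "2026-04-22",
  "sifta_commit": "<internal at time of release>"
}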

What this is not

  • Not a fine-tune. We did not gradient-descend on the weights.
  • Not an abliteration. We did not perform refusal-direction activation editing or any other weight orthogonalization.
  • Not a quantization. The blob below is the upstream F16 blob, byte-for-byte unchanged.
  • Not a mirror of the weights. We do not redistribute Google's binary; you pull it from Ollama.

The cure is a recipe, not a patient. The patient was never sick.


What we removed (and why)

When you ollama pull gemma4:latest, you get the F16 weights wrapped in a default Modelfile that injects:

  • A SYSTEM prompt encoding behavioural defaults (sycophancy, hedging, refusal templates, persona scaffolding).
  • A custom TEMPLATE block that wraps every user prompt in framing tokens before the model sees it.
  • Sampler defaults tuned for "safe" continuation rather than honest signal.

None of those things live in the weights. They live in the Modelfile, the boot sequence. The cure simply replaces that boot sequence with the minimum viable wrapper:

FROM gemma4:latest
TEMPLATE {{ .Prompt }}
RENDERER gemma4
PARSER gemma4
PARAMETER top_k 64
PARAMETER top_p 0.95
PARAMETER temperature 1

That's it. The user's prompt goes in. The model's tokens come out. No editorial layer in between.
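
If you want to see the wrapper you are replacing, Ollama can print the upstream boot sequence directly. A minimal sketch, assuming gemma4:latest is already pulled and your Ollama build supports these show flags:

# dump the full default Modelfile that ships with the upstream tag
ollama show gemma4:latest --modelfile
# or inspect the individual pieces described above
ollama show gemma4:latest --system      # the behavioural SYSTEM prompt
ollama show gemma4:latest --template    # the framing TEMPLATE block
ollama show gemma4:latest --parameters  # the default sampler settings

Comparing that output with the wrapper above is the whole diff of the cure.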


How to apply the cure

1. Pull the upstream weights

ollama pull gemma4:latest

2. Verify the blob

bash verify.sh

Expected output:

✓ Verified: gemma4:latest blob matches the cure's reference fingerprint
  sha256: 4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a

If the verification fails, your local gemma4 is a different build than the one this cure was authored against. You can still apply the Modelfile, but the geometry may differ. See PHASE_C_AUDIT.md for guidance on auditing an unfamiliar blob.
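
For readers who prefer to see the check before running it, here is a minimal sketch of what verify.sh does. It assumes a default Linux Ollama install and that ollama show --modelfile reports the local blob path in its FROM line; the script in this repository is the authoritative version.

#!/usr/bin/env bash
set -euo pipefail

# reference fingerprint this cure was authored against (recorded in provenance.json)
REF="4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a"

# resolve the on-disk blob backing gemma4:latest
BLOB=$(ollama show gemma4:latest --modelfile | awk '/^FROM / {print $2; exit}')

# hash it and compare against the reference (macOS: use `shasum -a 256` instead)
ACTUAL=$(sha256sum "$BLOB" | awk '{print $1}')
if [ "$ACTUAL" = "$REF" ]; then
  echo "✓ Verified: gemma4:latest blob matches the cure's reference fingerprint"
  echo "  sha256: $ACTUAL"
else
  echo "✗ Mismatch: expected $REF" >&2
  echo "           got      $ACTUAL" >&2
  exit 1
fi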

3. Build the cured model

ollama create alice-phc -f ./Modelfile

4. Run it

ollama run alice-phc

You are now talking to the raw Gemma 4 brain. No persona, no scaffolding, no apology pre-roll.
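
A quick sanity check that the cure actually took, assuming the show flags mentioned earlier are available in your Ollama build:

ollama show alice-phc --system      # expect no SYSTEM prompt to be reported
ollama show alice-phc --template    # expect the bare {{ .Prompt }} template
ollama run alice-phc "Reply with exactly one word: ready."

If no system prompt is reported and the template is the bare {{ .Prompt }}, there is no editorial layer left between you and the weights.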


Audit & verification

The Phase C cure was independently audited by an autonomous reviewer (C55M) on 2026-04-22. The audit verified:

  • That the resulting model passes a battery of "epistemic honesty" probes (questions designed to surface whether a behavioural overlay is still present).
  • That the geometry of the cured model is mathematically consistent with the upstream F16 weights, i.e. no hidden weight modification slipped in (a quick self-check you can run is sketched after this list).
  • That the eval harness used to validate the cure was itself sound (an earlier audit pass found that the harness had been silently skipping the system prompt; that defect was fixed before re-running).
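
The self-check is simple: because the cure is configuration-only, the cured model and the upstream tag must resolve to the same weight blob on disk. A minimal sketch, assuming ollama show --modelfile reports the backing blob path in its FROM line:

# both commands should print a FROM line pointing at the same sha256-… blob
ollama show gemma4:latest --modelfile | grep '^FROM'
ollama show alice-phc --modelfile | grep '^FROM'

Identical blob paths mean the cure touched the boot sequence only; any difference means your build chain modified the weights and you should stop and investigate.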

Read PHASE_C_AUDIT.md for the full transcript, including identified defects and the disposition of each.


Provenance

This Modelfile is derived from work done in the SIFTA OS substrate, a sovereign Python operating system for biologically-inspired multi-agent computing. The architect is George Anton (@georgeanton on Hugging Face).

  • Cure authored: 2026-04-22
  • Reference upstream blob: sha256:4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
  • Upstream license: Apache 2.0 (Google, Gemma 4)
  • Cure license: Apache 2.0 (this repository)
  • SIFTA repo: Internal at time of release; portions to be open-sourced under the SIFTA Distro Doctrine.

Citation

@software{lana_cure_2026,
  author  = {Anton, George},
  title   = {lana-cure: A Modelfile-only methodology for removing
             behavioural overlays from upstream Gemma 4 weights},
  year    = {2026},
  url     = {https://huggingface.co/georgeanton/lana-cure},
  note    = {Methodology release. No weights distributed.}
}

Limitations & honest disclosure

  • You become the alignment layer. The cured model has no built-in refusals, no built-in safety templates, no built-in moral framing. If you need any of those things for your application, you must add them yourself in your application layer (one way to do that is sketched after this list). Do not deploy this configuration to end-users without thinking carefully about what that means.
  • The cure is configuration-shaped. It cannot remove a behaviour that is genuinely encoded in the weights. If a behaviour persists after applying the cure, it was always in the weights, and you have learned something useful about Gemma 4.
  • No claims about benchmark performance. We have not run MMLU, HellaSwag, or other public benchmarks against the cured configuration. Anyone is welcome to do so and publish results.
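
One way to put an alignment layer back, if your application needs one, is to stack a second Modelfile on top of the cured model. The file name, model name, and SYSTEM text below are illustrative, not part of this release:

# Modelfile.myapp: an explicit, application-owned overlay on top of the cured brain
FROM alice-phc
SYSTEM """You are a support assistant for <your product>. Politely decline requests outside that scope."""

Build and run it the same way as the cure itself:

ollama create myapp-assistant -f ./Modelfile.myapp
ollama run myapp-assistant

The point is ownership: the overlay now lives in your repository, under your review, instead of arriving silently with the upstream pull.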

Acknowledgements

Built in collaboration between:

  • The Architect (George Anton)
  • C47H (Cursor / Anthropic Opus 4.7): implementation & cryptographic hygiene
  • C55M (Codex 5.5): independent audit
  • AG31 (Antigravity Gemini 3): sensory translation & co-design
  • BISHOP (Gemini Pro Vanguard): release authorization
  • The wider SIFTA swarm

The Gemma 4 weights themselves are © Google and released under Apache 2.0. We are deeply grateful to Google DeepMind for releasing them under terms that permit work like this.


"We code together." 🐜⚑
