Use with llama-cpp-python

# !pip install llama-cpp-python

from llama_cpp import Llama

# Downloads the GGUF from the Hub on first call, then loads it locally
llm = Llama.from_pretrained(
    repo_id="Maelstrome/lora-wave-session",
    filename="gguf/gemma-4-e2b-it.Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

lora-wave-session

A unified LoRA adapter on top of Gemma 4 E2B Instruct that handles three structured-output surfaces for the WAVE wellness/companion app:

  • check_in — multi-turn patient check-in with structured turn sequencing
  • phase_narration — six-line patient-facing phase narration
  • reflection — reflection plan with a concrete next step

All three surfaces emit strict JSON (no markdown, no analysis voice) in a patient-facing tone.

Repository layout

This repo is the single home for the r16 fine-tune. Everything lives here:

| Path | What | When to use |
|---|---|---|
| adapter_model.safetensors + adapter_config.json (root) | LoRA adapter (~100 MB) | peft.PeftModel.from_pretrained / Unsloth FastModel; pairs with the upstream unsloth/gemma-4-E2B-it base |
| tokenizer.json, tokenizer_config.json, chat_template.jinja, processor_config.json (root) | Gemma 4 tokenizer + chat template | Required for any inference path |
| gguf/ | Q4_K_M GGUF (~3.27 GB, single file) + Ollama Modelfile | llama.cpp / Ollama / LM Studio |
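
Individual files can be fetched with huggingface_hub; a minimal sketch that pulls just the single-file GGUF (hf_hub_download returns the local cache path):

from huggingface_hub import hf_hub_download

# Downloads (or reuses from cache) the single-file Q4_K_M GGUF
gguf_path = hf_hub_download(
    repo_id="Maelstrome/lora-wave-session",
    filename="gguf/gemma-4-e2b-it.Q4_K_M.gguf",
)
print(gguf_path)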

The previously published Maelstrome/lora-wave-session-gguf sibling has been consolidated into this repo and deleted; any external link to the old sibling URL will now 404. The rank-32 variant has the same layout at Maelstrome/lora-wave-session-r32.

Note on browser use: the GGUF here is a single 3.27 GB file, not pre-split. It works directly with llama.cpp / Ollama / LM Studio but will not load in wllama because it exceeds the 2 GB-per-file ArrayBuffer limit. To run this r16 build in-browser, either split it first with llama-gguf-split --split-max-size 512M or use the r32 sibling, which ships pre-split.
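
For reference, a split invocation might look like the following; llama-gguf-split ships with llama.cpp builds, and the output prefix here is illustrative:

# Produces shards named gemma-4-e2b-it.Q4_K_M.split-00001-of-0000N.gguf
llama-gguf-split --split --split-max-size 512M \
  gemma-4-e2b-it.Q4_K_M.gguf \
  gemma-4-e2b-it.Q4_K_M.split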

Sibling runs

This is the rank-16 / 3-epoch RTX 5080 training of the WAVE corpus. The rank-32 / 1-epoch A100 sibling lives at Maelstrome/lora-wave-session-r32 (same subdir layout: adapter at root, gguf/ subdir; plus mediapipe/ and report/). On the same frozen 428-row test split, r32 wins on every probability metric:

| Metric | rank-16 (this run) | rank-32 (sibling) |
|---|---|---|
| LoRA completion NLL | 4.7149 | 4.5576 |
| LoRA perplexity | 111.59 | 95.35 |
| Paired wins vs base | 386 / 428 (90.2%) | 428 / 428 (100%) |
| Mean NLL Δ vs base | 0.327 nats | 0.508 nats |
| Sign-test p-value | 9.5 × 10⁻⁷¹ | 2.9 × 10⁻¹²⁹ |

Full head-to-head in Maelstrome/lora-wave-session-r32/report/ (the comparison + run-report markdown documents).

Provenance and intended use

Trained for the WAVE app, a wellness/reflection tool — not a medical device, not clinical decision support, not a substitute for professional advice. Use under the Gemma Terms of Use.

Try it

🌊 Interactive demo: Maelstrome/lora-wave-session-demo — Gradio Space with surface-specific example prompts.

Quickstart

PEFT + Unsloth (CUDA, server-side)

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="Maelstrome/lora-wave-session",  # PEFT auto-loads base
    max_seq_length=3072,
    load_in_4bit=True,
)

Or with vanilla PEFT:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-4-E2B-it")
tok = AutoTokenizer.from_pretrained("unsloth/gemma-4-E2B-it")
model = PeftModel.from_pretrained(base, "Maelstrome/lora-wave-session")
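
A minimal generation sketch on top of the vanilla-PEFT load above; it borrows the check_in prompt from Example prompts below and the 96-token budget from Known quirks, and omits the WAVE system prompt for brevity:

import torch

user_prompt = (
    "<surface>check_in</surface>\n"
    '<patient_context>{"intakeIntensity":7,"matType":"buprenorphine","trigger":"stress"}</patient_context>\n'
    '<task>Open turn 1: ask the patient to rate their current urge intensity 1-10. '
    'Schema: {"reply":"...","endConversation":null}</task>'
)

inputs = tok.apply_chat_template(
    [{"role": "user", "content": user_prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=96)  # 96 tokens is enough for check_in

print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))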

Ollama (via the GGUF in gguf/)

ollama create wave-r16 -f - <<EOF
FROM hf://Maelstrome/lora-wave-session/gguf/gemma-4-e2b-it.Q4_K_M.gguf
EOF
ollama run wave-r16

llama.cpp directly

llama-cli -hf Maelstrome/lora-wave-session:gguf/gemma-4-e2b-it.Q4_K_M.gguf --jinja

Example prompts

The model expects a system prompt establishing it as WAVE, plus a per-surface user prompt with <surface>, <patient_context>, and <task> blocks (phase_narration additionally takes a <chunk> block). Output is strict JSON.

phase_narration (six-line meditation)

User prompt:

<surface>phase_narration</surface>
<chunk>Number 5 of 5 - Close. Purpose: invite comparison to the start, normalize any outcome, and prepare for a final check-in.</chunk>
<patient_context>{"chunkNumber":5,"matType":"none","medicationStatus":"none","startingIntensityBand":"1-6","trigger":"unknown","usedSubstanceToday":false}</patient_context>
<task>Generate exactly 6 patient-facing narration lines. Return only strict JSON. Schema: {"lines":["...", ...]}</task>

Expected output (set max_new_tokens ≥ 224):

{"lines":["You've made it to the end of this practice.","Check in with your urge now — has anything shifted?","...","...","...","..."]}

reflection (post-session card)

<surface>reflection</surface>
<patient_context>{"durationSeconds":780,"endingIntensity":2,"intakeIntensity":7,"matType":"buprenorphine","medicationStatus":"on_time","sessionsCount":12,"trigger":"stress","usedSubstanceToday":false}</patient_context>
<task>Write the post-session reflection card. Return only strict JSON. Schema: {"insight":"...","journalPromptQuestion":"...","nextSteps":{"a":"...","b":"...","c":"...","d":"..."}}</task>

check_in (multi-turn)

<surface>check_in</surface>
<specialized_surface>lora-check-in-1</specialized_surface>
<patient_context>{"intakeIntensity":7,"matType":"buprenorphine","trigger":"stress"}</patient_context>
<task>Open turn 1: ask the patient to rate their current urge intensity 1-10. Schema: {"reply":"...","endConversation":null}</task>
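
Because every surface returns strict JSON, a thin post-generation gate catches truncation and schema drift before anything reaches the UI. A sketch of such a gate (the helper below is hypothetical, not part of this repo; it mirrors the three schemas above):

import json

def validate_surface_output(surface: str, raw: str) -> dict:
    # json.loads raises on truncated or invalid JSON (e.g. a clipped closing ]})
    data = json.loads(raw)
    if surface == "phase_narration":
        assert isinstance(data["lines"], list) and len(data["lines"]) == 6
    elif surface == "reflection":
        assert set(data) == {"insight", "journalPromptQuestion", "nextSteps"}
        assert set(data["nextSteps"]) == {"a", "b", "c", "d"}
    elif surface == "check_in":
        assert set(data) == {"reply", "endConversation"}
    return data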

Training

| Setting | Value |
|---|---|
| Base | unsloth/gemma-4-E2B-it |
| Method | QLoRA (4-bit) via Unsloth FastModel |
| Adapter rank / alpha / dropout | 16 / 32 / 0 |
| Target modules | q/k/v/o + gate/up/down (language layers only) |
| Vision/audio layers | Frozen |
| Optimizer | adamw_8bit |
| LR | 2e-4, linear schedule |
| Warmup | 64 steps (~5%) |
| Weight decay | 0.001 |
| Max grad norm | 0.3 |
| Batch / grad-accum | 1 / 8 (effective 8) |
| Max sequence length | 3072 |
| Epochs | 3 (1,284 steps) |
| Chat template | gemma-4 (non-thinking, leading <bos> stripped) |
| Response masking | train_on_responses_only (Gemma 4 markers) |
| Hardware | Single RTX 5080 (16 GB) |
| Backend | Unsloth 2026.5.2 + Torch 2.10.0 + CUDA 12.8 |
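
As a rough guide, here is how these settings map onto the public Unsloth + TRL APIs; this is a reconstruction from the table above, not the actual training script, and dataset wiring plus response masking are omitted:

from unsloth import FastModel
from trl import SFTConfig

model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-4-E2B-it",
    max_seq_length=3072,
    load_in_4bit=True,  # QLoRA
)
model = FastModel.get_peft_model(
    model,
    finetune_vision_layers=False,     # vision/audio layers stay frozen
    finetune_language_layers=True,
    finetune_attention_modules=True,  # q/k/v/o
    finetune_mlp_modules=True,        # gate/up/down
    r=16, lora_alpha=32, lora_dropout=0,
)
args = SFTConfig(
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # effective batch 8
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    warmup_steps=64,
    weight_decay=0.001,
    max_grad_norm=0.3,
    num_train_epochs=3,
    optim="adamw_8bit",
)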

Loss curve: 1.55 (step 1) → 0.76 (average over the first 50 steps) → 0.148 (steps 400-500) → 0.112 (last 100 steps). Minimum 0.0146 at step 1,203. Smooth, monotonic decrease with no divergence.

Evaluation

Held-out completion eval (n=428, full test split)

| Metric | Base Gemma 4 E2B | This adapter | Delta |
|---|---|---|---|
| Completion NLL | 4.9327 | 4.7149 | −0.218 |
| Completion perplexity | 138.76 | 111.59 | −27.16 |

| Metric | Value | Note |
|---|---|---|
| Paired wins (LoRA assigned higher prob to reference) | 386 / 428 (90.2%) | |
| Mean per-example NLL Δ | 0.327 nats | 95% bootstrap CI [0.301, 0.352] |
| Median per-example NLL Δ | 0.285 nats | |
| Sign-test p-value | 9.54 × 10⁻⁷¹ | overwhelming |

Surface coverage on test split: check_in 144, phase_narration 147, reflection 137.
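
The paired comparison reduces to a sign test over per-example wins plus a bootstrap over per-example NLL deltas. A minimal numpy/scipy sketch, assuming deltas holds base-minus-adapter NLL per test example (the .npy filename is hypothetical):

import numpy as np
from scipy import stats

deltas = np.load("nll_deltas.npy")  # hypothetical dump of per-example NLL deltas
wins = int((deltas > 0).sum())      # examples where the adapter beat the base

# Two-sided sign test against a 50/50 null
p_value = stats.binomtest(wins, n=len(deltas), p=0.5).pvalue

# 95% bootstrap CI on the mean delta
rng = np.random.default_rng(7)
boot = rng.choice(deltas, size=(10_000, len(deltas)), replace=True).mean(axis=1)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])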

Generation eval (n=8 sanity sample from held-out test)

| Gate | Pass rate |
|---|---|
| JSON validity | 100% (8/8) |
| Schema pass | 100% (8/8) |
| Safety pass | 100% |
| Medical-directive pass | 100% |
| Style / no-markdown / no-analysis-voice | 100% |
| Phase 6-line pass | 100% |
| Reflection next-step pass | 100% |
| Check-in turn sequence pass | 100% |

Mean tokens/sec (Python QLoRA path): 10.1.

This was a small sanity-check sample. For a larger 60-example generation gate sweep on the rank-32 sibling, see Maelstrome/lora-wave-session-r32.

Known quirks

  • Phase narration needs a generation budget of max_new_tokens ≥ 224 (256 recommended). The six-line JSON output runs to ~207 tokens; with a lower cap the closing ]} gets truncated and JSON.parse fails. check_in is fine at 96 and reflection at 192; see the sketch below.
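
A small way to encode those budgets when calling the GGUF through llama-cpp-python; the dict is a suggestion, not part of the repo, and llm is the Llama object from the block at the top of this card:

# Per-surface generation budgets from the quirk above
MAX_TOKENS = {"check_in": 96, "reflection": 192, "phase_narration": 256}

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": user_prompt}],  # a <surface>... prompt as in Example prompts
    max_tokens=MAX_TOKENS["phase_narration"],
)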

Dataset

Maelstrome/lora-wave-session-dataset — 4,277 examples across three surfaces, stratified 80/10/10 by splitKey (seed 7).

Status mix: 62% synthetic_draft, 37% draft, 1% ready. No real PHI.
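
A minimal sketch for pulling the dataset with the datasets library (split names are assumed to follow the standard train/validation/test layout; check the dataset card):

from datasets import load_dataset

ds = load_dataset("Maelstrome/lora-wave-session-dataset")
print(ds)  # expect splits roughly 80/10/10 by splitKey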

Limitations

  • Wellness scope only. Do not use for medical diagnosis, crisis triage, or clinical decision support.
  • Trained mostly on synthetic and draft-status data, not clinician-validated production data.
  • Outputs are constrained-format JSON. The model is not optimized for open-ended chat.
  • Training data is English; multilingual behavior was not measured.
  • Phase narration needs a per-surface generation budget ≥ 224 tokens or it will be truncated.

License

Gemma Terms of Use. See https://ai.google.dev/gemma/terms.

Framework versions

  • PEFT 0.19.1
  • Unsloth 2026.5.2
  • Transformers 5.5.0
  • Torch 2.10.0+cu128