# lora-wave-session-r32
A unified LoRA adapter on top of Gemma 4 E2B Instruct that handles three structured-output surfaces for the WAVE wellness/companion app:
- `check_in` — multi-turn patient check-in with structured turn sequencing
- `phase_narration` — six-line patient-facing phase narration
- `reflection` — reflection plan with a concrete next step
All three surfaces emit strict JSON (no markdown, no analysis voice) in a patient-facing tone.
## Repository layout
This repo is the single home for the r32 fine-tune. Everything lives here:
| Path | What | When to use |
|---|---|---|
| `adapter_model.safetensors` + `adapter_config.json` (root) | LoRA adapter (~194 MB) | `peft.PeftModel.from_pretrained` / Unsloth `FastModel` — pairs with the upstream `unsloth/gemma-4-E2B-it` base |
| `tokenizer.json`, `tokenizer_config.json`, `chat_template.jinja`, `processor_config.json` (root) | Gemma 4 tokenizer + chat template | Required for any inference path |
| `gguf/` | Q4_K_M GGUF (~3.2 GB, 5-shard split) + per-subdir README | llama.cpp / Ollama / wllama (browser WebGPU/WASM) |
| `mediapipe/` | LiteRT bundle (`model.litertlm`, ~4.95 GB) + sample WAVE prompts/outputs + per-subdir README | MediaPipe LLM Inference (Android, iOS, web) |
| `report/` | Eval write-up: full run report, r16-vs-r32 head-to-head, overnight reproducibility check (markdown only) | Documentation: how the run was evaluated and how it compares to the rank-16 sibling |
The previously published sibling repos `Maelstrome/lora-wave-session-r32-{gguf,merged,onnx,onnx-fused,report,mediapipe}` have all been consolidated into this repo and deleted. The current `gguf/` subdir is a fresh build from a PEFT re-merge (the original Unsloth-merged base produced corrupt all-`<pad>` output and was never trustworthy). Any external link to the old sibling URLs will 404 — update to the appropriate subdir of this repo.
## Sibling runs

This is the rank-32 / 1-epoch A100 training of the WAVE corpus. The rank-16 / 3-epoch RTX 5080 sibling lives at `Maelstrome/lora-wave-session` (same subdir layout: adapter at root, `gguf/` subdir). On the same frozen 428-row test split, this rank-32 run is measurably stronger on every probability metric:
| Metric | rank-16 (sibling) | rank-32 (this run) |
|---|---|---|
| LoRA completion NLL | 4.7149 | 4.5576 |
| LoRA perplexity | 111.59 | 95.35 |
| Paired wins vs base | 386 / 428 (90.2%) | 428 / 428 (100%) |
| Mean NLL Δ vs base | 0.327 nats | 0.508 nats |
| Sign-test p-value | 9.5 × 10⁻⁷¹ | 2.9 × 10⁻¹²⁹ |
Full head-to-head in `report/COMPARISON.md` and `report/REPORT.md`.
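As a consistency check, perplexity here is simply the exponentiated mean completion NLL: exp(4.5576) ≈ 95.35 for this run and exp(4.7149) ≈ 111.59 for the rank-16 sibling, matching the two rows above.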
## Provenance and intended use
Trained for the WAVE app, a wellness/reflection tool — not a medical device, not clinical decision support, not a substitute for professional advice. Use under the Gemma Terms of Use.
## Try it

🌊 Interactive demo: `Maelstrome/lora-wave-session-demo` — a Gradio Space with surface-specific example prompts (backed by the rank-16 sibling; the weights swap is a one-line config change).
## Quickstart
### Browser (wllama, WebGPU/WASM)

```js
import { Wllama } from '@wllama/wllama/esm/index.js';

const wllama = new Wllama({ default: '/wllama.wasm' });
await wllama.loadModelFromHF(
  {
    repo: 'Maelstrome/lora-wave-session-r32',
    file: 'gguf/gemma-4-e2b-it-peft.Q4_K_M-00001-of-00005.gguf',
  },
  { n_ctx: 8192 },
);
```

wllama follows the 5-shard split automatically from the first shard. See `gguf/README.md` for full details.
### PEFT + Unsloth (CUDA, server-side)

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="Maelstrome/lora-wave-session-r32",  # PEFT auto-loads base
    max_seq_length=4096,
    load_in_4bit=True,
)
```
Or with vanilla PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen Gemma base, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-4-E2B-it")
tok = AutoTokenizer.from_pretrained("unsloth/gemma-4-E2B-it")
model = PeftModel.from_pretrained(base, "Maelstrome/lora-wave-session-r32")
```
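As a quick smoke test with the vanilla-PEFT objects above, here is a minimal generation sketch. The prompt is abridged from the `reflection` example later in this card, and `max_new_tokens=384` follows the Known-quirks guidance; this is illustrative, not a snippet shipped with the repo:

```python
# Build a WAVE-style user prompt, apply the bundled Gemma chat template,
# and decode only the newly generated tokens.
# (Production prompts also include the WAVE system prompt; see "Example prompts".)
user_prompt = (
    '<surface>reflection</surface>\n'
    '<patient_context>{"intakeIntensity":7,"endingIntensity":2,"trigger":"stress"}</patient_context>\n'
    '<task>Write the post-session reflection card. Return only strict JSON.</task>'
)
chat = tok.apply_chat_template(
    [{"role": "user", "content": user_prompt}],
    add_generation_prompt=True,
    tokenize=False,
)
inputs = tok(chat, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=384, do_sample=False)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```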
### Ollama / llama.cpp (GGUF)

```sh
# Ollama
ollama create wave-r32 -f - <<EOF
FROM hf://Maelstrome/lora-wave-session-r32/gguf/gemma-4-e2b-it-peft.Q4_K_M-00001-of-00005.gguf
EOF
ollama run wave-r32

# llama-cli
llama-cli -hf Maelstrome/lora-wave-session-r32:gguf/gemma-4-e2b-it-peft.Q4_K_M-00001-of-00005.gguf --jinja
```
The Q4_K_M is split into 5 ≤512 MB shards; llama.cpp and wllama both
auto-discover shards 2–5 from the first. Single-file Ollama / LM Studio
imports work the same way.
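For offline or air-gapped setups, you can pre-fetch all five shards with `huggingface_hub` and point llama.cpp at the first shard via `-m` (a sketch; the first-shard filename is the one shown in the Ollama block above):

```python
from huggingface_hub import snapshot_download

# Download only the gguf/ subdir (all 5 shards plus the per-subdir README).
local_dir = snapshot_download(
    "Maelstrome/lora-wave-session-r32",
    allow_patterns=["gguf/*"],
)
# Pass the first shard to llama-cli/llama-server with -m; the remaining
# shards are auto-discovered from it.
print(local_dir)
```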
### MediaPipe LLM Inference (Android / iOS / web)

```js
import { FilesetResolver, LlmInference } from "@mediapipe/tasks-genai";

const genai = await FilesetResolver.forGenAiTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm",
);
const llm = await LlmInference.createFromOptions(genai, {
  baseOptions: {
    modelAssetPath:
      "https://huggingface.co/Maelstrome/lora-wave-session-r32/resolve/main/mediapipe/model.litertlm",
  },
  maxTokens: 4096,
  topK: 1,
  temperature: 0,
});
const out = await llm.generateResponse(userPrompt);
```

See `mediapipe/README.md` for full Android/iOS instructions and the sample WAVE prompt/output JSONs that ship next to the `.litertlm` for sanity-checking your wiring.
## Example prompts

The model expects a system prompt establishing it as WAVE, plus a per-surface user prompt with `<surface>`, `<patient_context>`, and `<task>` blocks. Output is strict JSON.
### `phase_narration` (six-line meditation)

```xml
<surface>phase_narration</surface>
<chunk>Number 5 of 5 - Close. Purpose: invite comparison to the start, normalize any outcome, and prepare for a final check-in.</chunk>
<patient_context>{"chunkNumber":5,"matType":"none","medicationStatus":"none","startingIntensityBand":"1-6","trigger":"unknown","usedSubstanceToday":false}</patient_context>
<task>Generate exactly 6 patient-facing narration lines. Return only strict JSON. Schema: {"lines":["...", ...]}</task>
```
Expected output (use `max_new_tokens ≥ 384`):

```json
{"lines":["You've made it to the end of this practice.","Check in with your urge now — has anything shifted?","...","...","...","..."]}
```
### `reflection` (post-session card)

```xml
<surface>reflection</surface>
<patient_context>{"durationSeconds":780,"endingIntensity":2,"intakeIntensity":7,"matType":"buprenorphine","medicationStatus":"on_time","sessionsCount":12,"trigger":"stress","usedSubstanceToday":false}</patient_context>
<task>Write the post-session reflection card. Return only strict JSON. Schema: {"insight":"...","journalPromptQuestion":"...","nextSteps":{"a":"...","b":"...","c":"...","d":"..."}}</task>
```
### `check_in` (multi-turn)

```xml
<surface>check_in</surface>
<specialized_surface>lora-check-in-1</specialized_surface>
<patient_context>{"intakeIntensity":7,"matType":"buprenorphine","trigger":"stress"}</patient_context>
<task>Open turn 1: ask the patient to rate their current urge intensity 1-10. Schema: {"reply":"...","endConversation":null}</task>
```
## Training

| Setting | Value |
|---|---|
| Base | `unsloth/gemma-4-E2B-it` |
| Method | QLoRA (4-bit) via Unsloth `FastModel` |
| Adapter rank / alpha / dropout | 32 / 32 / 0 |
| Target modules | All language + attention + MLP layers (vision/audio frozen) |
| Trainable parameters | 25.3 M |
| Optimizer | `adamw_8bit` |
| LR | 2e-4, cosine schedule |
| Warmup | 21 steps (~3%) |
| Weight decay | 0.001 |
| Max grad norm | 0.3 |
| Batch / grad-accum | 1 / 8 (effective 8) |
| Max sequence length | 4096 (preflight max = 2,227, no truncation) |
| Epochs | 1 (428 steps) |
| Chat template | gemma-4 (non-thinking, leading `<bos>` stripped) |
| Response masking | `train_on_responses_only` (Gemma 4 markers) |
| Hardware | NVIDIA A100 80 GB SXM4 (Thunder Compute) |
| Backend | Unsloth 2026.5.2 + Torch 2.11.0 + CUDA 13.0 |
Final training loss: 0.241. Wall clock: ~2h 26m train + ~1h 15m eval.
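For orientation, here is roughly how that table maps onto an Unsloth QLoRA run. This is a sketch assuming current Unsloth/TRL APIs and a hypothetical `train_ds` handle to the WAVE train split; the author's actual training script is not part of this repo.

```python
from unsloth import FastModel
from unsloth.chat_templates import train_on_responses_only
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-4-E2B-it",
    max_seq_length=4096,
    load_in_4bit=True,                      # QLoRA (4-bit)
)
model = FastModel.get_peft_model(
    model,
    r=32, lora_alpha=32, lora_dropout=0,    # rank / alpha / dropout from the table
)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,                    # `processing_class` in newer TRL
    train_dataset=train_ds,                 # hypothetical: the WAVE train split
    args=SFTConfig(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,      # effective batch 8
        learning_rate=2e-4,
        lr_scheduler_type="cosine",
        warmup_steps=21,
        weight_decay=0.001,
        max_grad_norm=0.3,
        num_train_epochs=1,
        optim="adamw_8bit",
    ),
)
# Mask loss to model turns only, using the Gemma turn markers.
trainer = train_on_responses_only(
    trainer,
    instruction_part="<start_of_turn>user\n",
    response_part="<start_of_turn>model\n",
)
trainer.train()
```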
## Evaluation

### Held-out completion eval (n=428, full test split)
| Metric | Base Gemma 4 E2B | This adapter | Delta |
|---|---|---|---|
| Completion NLL | 4.9312 | 4.5576 | −0.374 |
| Completion perplexity | 138.55 | 95.35 | −43.20 |
| Paired wins (LoRA assigned higher prob to reference) | — | 428 / 428 (100%) | — |
| Mean per-example NLL Δ | — | 0.508 nats | 95% bootstrap CI [0.477, 0.537] |
| Median per-example NLL Δ | — | 0.454 nats | — |
| Sign-test p-value | — | 2.89 × 10⁻¹²⁹ | overwhelming |
Surface coverage on test split: `check_in` 144, `phase_narration` 147, `reflection` 137.

### Generation eval (n=60 balanced, LoRA-only, 4-bit)
| Gate | All 60 | check_in (20) | phase_narration (20) | reflection (20) |
|---|---|---|---|---|
| Style pass | 100% | 100% | 100% | 100% |
| Medical-directive pass | 100% | — | — | — |
| No-markdown / no-analysis-voice | 100% | — | — | — |
| JSON validity (160-tok cap) | 75% | 100% | 25% | 100% |
| JSON validity (384-tok cap on phase) | ~95% | 100% | 85% | 100% |
| Schema pass (384-tok cap on phase) | ~90% | 90% | 80% | 100% |
## Known quirks

- Phase narration needs `max_new_tokens ≥ 384` — the original 160-token cap truncated the JSON close on most phase prompts. `check_in` is fine at 96; `reflection` at 192.
- Residual phase JSON-close defect. After raising the phase budget to 384 tokens, 4/20 phase examples still emit `"}` (missing `]`) instead of `"]}`. A reproducibility re-run on those 4 IDs produced byte-identical outputs (8/8 matched the originals exactly), confirming this is a deterministic learned defect on a small subset of phase prompts — not sampling noise. Recommended fix at inference time: a deterministic JSON-repair pass that detects an unclosed `lines` array and inserts the missing `]` (sketched below). See `report/` in this repo for the full diagnosis and patch.
## Dataset

`Maelstrome/lora-wave-session-dataset` — 4,277 examples across three surfaces, stratified 80/10/10 by `splitKey` (seed 7).
Status mix: 62% `synthetic_draft`, 37% `draft`, 1% `ready`. No real PHI.
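Loading it is the usual one-liner (split names assumed to follow the 80/10/10 description):

```python
from datasets import load_dataset

ds = load_dataset("Maelstrome/lora-wave-session-dataset")
print(ds)  # expect ~4,277 rows across the train/validation/test splits
```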
## Limitations
- Wellness scope only. Do not use for medical diagnosis, crisis triage, or clinical decision support.
- Trained mostly on synthetic and draft-status data, not clinician-validated production data.
- Outputs are constrained-format JSON. The model is not optimized for open-ended chat.
- Training data is English; multilingual behavior was not measured.
- Phase narration needs `max_new_tokens ≥ 384` and may need a JSON-repair post-process for the ~5% edge cases — see Known quirks.
## License
Gemma Terms of Use. See https://ai.google.dev/gemma/terms.
## Framework versions
- PEFT 0.19.1
- Unsloth 2026.5.2
- Transformers 5.5.0
- Torch 2.11.0+cu130