Instructions for using Maelstrome/lora-wave-session with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- PEFT
How to use Maelstrome/lora-wave-session with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("unsloth/gemma-4-e2b-it-unsloth-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "Maelstrome/lora-wave-session")
- llama-cpp-python
How to use Maelstrome/lora-wave-session with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Maelstrome/lora-wave-session",
    filename="gguf/gemma-4-e2b-it.Q4_K_M.gguf",
)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Maelstrome/lora-wave-session with llama.cpp:
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Maelstrome/lora-wave-session:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf Maelstrome/lora-wave-session:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Maelstrome/lora-wave-session:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf Maelstrome/lora-wave-session:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Maelstrome/lora-wave-session:Q4_K_M
# Run inference directly in the terminal:
./llama-cli -hf Maelstrome/lora-wave-session:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Maelstrome/lora-wave-session:Q4_K_M
# Run inference directly in the terminal:
./build/bin/llama-cli -hf Maelstrome/lora-wave-session:Q4_K_M
Use Docker
docker model run hf.co/Maelstrome/lora-wave-session:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use Maelstrome/lora-wave-session with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Maelstrome/lora-wave-session"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Maelstrome/lora-wave-session",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
Use Docker
docker model run hf.co/Maelstrome/lora-wave-session:Q4_K_M
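Back to the pip-installed vLLM server above: to call it from Python instead of curl, here is a minimal sketch using the openai client package. The client library choice is an assumption; any OpenAI-compatible client pointed at the same endpoint works.

# Minimal sketch, assuming `pip install openai` and the vLLM server from above
# listening on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key
completion = client.chat.completions.create(
    model="Maelstrome/lora-wave-session",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)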
- Ollama
How to use Maelstrome/lora-wave-session with Ollama:
ollama run hf.co/Maelstrome/lora-wave-session:Q4_K_M
- Unsloth Studio
How to use Maelstrome/lora-wave-session with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Maelstrome/lora-wave-session to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Maelstrome/lora-wave-session to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Maelstrome/lora-wave-session to start chatting
- Pi
How to use Maelstrome/lora-wave-session with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf Maelstrome/lora-wave-session:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "Maelstrome/lora-wave-session:Q4_K_M" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use Maelstrome/lora-wave-session with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf Maelstrome/lora-wave-session:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup
# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Maelstrome/lora-wave-session:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use Maelstrome/lora-wave-session with Docker Model Runner:
docker model run hf.co/Maelstrome/lora-wave-session:Q4_K_M
- Lemonade
How to use Maelstrome/lora-wave-session with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Maelstrome/lora-wave-session:Q4_K_M
Run and chat with the model
lemonade run user.lora-wave-session-Q4_K_M
List all available models
lemonade list
lora-wave-session
A unified LoRA adapter on top of Gemma 4 E2B Instruct that handles three structured-output surfaces for the WAVE wellness/companion app:
- check_in: multi-turn patient check-in with structured turn sequencing
- phase_narration: six-line patient-facing phase narration
- reflection: reflection plan with a concrete next step
All three surfaces emit strict JSON, with no markdown and no analysis voice, in a patient-facing tone.
Repository layout
This repo is the single home for the r16 fine-tune. Everything lives here:
| Path | What | When to use |
|---|---|---|
| adapter_model.safetensors + adapter_config.json (root) | LoRA adapter (~100 MB) | peft.PeftModel.from_pretrained / Unsloth FastModel; pairs with the upstream unsloth/gemma-4-E2B-it base |
| tokenizer.json, tokenizer_config.json, chat_template.jinja, processor_config.json (root) | Gemma 4 tokenizer + chat template | Required for any inference path |
| gguf/ | Q4_K_M GGUF (~3.27 GB, single file) + Ollama Modelfile | llama.cpp / Ollama / LM Studio |
The previously published Maelstrome/lora-wave-session-gguf sibling has been consolidated into this repo and deleted; the rank-32 variant has the same layout at Maelstrome/lora-wave-session-r32. Any external link to the old sibling URL will 404.

Note on browser use: the GGUF here is a single 3.27 GB file, not pre-split. It works directly with llama.cpp / Ollama / LM Studio, but it will not load in wllama because it exceeds the 2 GB-per-file ArrayBuffer limit. To run this r16 build in-browser, either split it first with llama-gguf-split --split-max-size 512M or use the r32 sibling, which ships pre-split.
Sibling runs
This is the rank-16 / 3-epoch RTX 5080 training of the WAVE corpus. The rank-32 / 1-epoch A100 sibling lives at Maelstrome/lora-wave-session-r32 (same subdir layout: adapter at root, gguf/ subdir; plus mediapipe/ and report/). On the same frozen 428-row test split, r32 wins on every probability metric:
| Metric | rank-16 (this run) | rank-32 (sibling) |
|---|---|---|
| LoRA completion NLL | 4.7149 | 4.5576 |
| LoRA perplexity | 111.59 | 95.35 |
| Paired wins vs base | 386 / 428 (90.2%) | 428 / 428 (100%) |
| Mean NLL Δ vs base | 0.327 nats | 0.508 nats |
| Sign-test p-value | 9.5 × 10⁻⁷¹ | 2.9 × 10⁻¹²⁹ |
Full head-to-head in Maelstrome/lora-wave-session-r32/report/ (the comparison + run-report markdown documents).
Provenance and intended use
Trained for the WAVE app, a wellness/reflection tool — not a medical device, not clinical decision support, not a substitute for professional advice. Use under the Gemma Terms of Use.
Try it
🌊 Interactive demo: Maelstrome/lora-wave-session-demo — Gradio Space with surface-specific example prompts.
Quickstart
PEFT + Unsloth (CUDA, server-side)
from unsloth import FastModel
model, tokenizer = FastModel.from_pretrained(
model_name="Maelstrome/lora-wave-session", # PEFT auto-loads base
max_seq_length=3072,
load_in_4bit=True,
)
Or with vanilla PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-4-E2B-it")
tok = AutoTokenizer.from_pretrained("unsloth/gemma-4-E2B-it")
model = PeftModel.from_pretrained(base, "Maelstrome/lora-wave-session")
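Continuing from the vanilla PEFT snippet above (the same pattern applies to the Unsloth path), here is a minimal generation sketch. The system-prompt wording is an assumption; the user prompt is the check_in example from "Example prompts" below, and the token budget follows "Known quirks".

# Minimal generation sketch (assumed system-prompt wording, not the app's exact prompt).
import json
import torch

messages = [
    {"role": "system", "content": "You are WAVE. Reply with strict JSON only."},  # assumed wording
    {"role": "user", "content": (
        "<surface>check_in</surface>\n"
        '<patient_context>{"intakeIntensity":7,"matType":"buprenorphine","trigger":"stress"}</patient_context>\n'
        '<task>Open turn 1: ask the patient to rate their current urge intensity 1-10. '
        'Schema: {"reply":"...","endConversation":null}</task>'
    )},
]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=96, do_sample=False)  # check_in budget per Known quirks
reply = tok.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True)
print(json.loads(reply))  # strict-JSON turn object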
Ollama (via the GGUF in gguf/)
ollama create wave-r16 -f - <<EOF
FROM hf://Maelstrome/lora-wave-session/gguf/gemma-4-e2b-it.Q4_K_M.gguf
EOF
ollama run wave-r16
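To call the locally created wave-r16 model programmatically, a minimal sketch using the ollama Python client; the client choice is an assumption, and any OpenAI-compatible client pointed at Ollama's endpoint would work as well.

# Minimal sketch, assuming `pip install ollama` and the `wave-r16` model created above.
import json
import ollama

user_prompt = (
    "<surface>reflection</surface>\n"
    '<patient_context>{"durationSeconds":780,"endingIntensity":2,"intakeIntensity":7,'
    '"matType":"buprenorphine","medicationStatus":"on_time","sessionsCount":12,'
    '"trigger":"stress","usedSubstanceToday":false}</patient_context>\n'
    '<task>Write the post-session reflection card. Return only strict JSON. '
    'Schema: {"insight":"...","journalPromptQuestion":"...",'
    '"nextSteps":{"a":"...","b":"...","c":"...","d":"..."}}</task>'
)
response = ollama.chat(
    model="wave-r16",
    messages=[{"role": "user", "content": user_prompt}],
    format="json",  # optionally ask Ollama to constrain output to valid JSON
)
card = json.loads(response["message"]["content"])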
llama.cpp directly
llama-cli -hf Maelstrome/lora-wave-session:gguf/gemma-4-e2b-it.Q4_K_M.gguf --jinja
Example prompts
The model expects a system prompt establishing it as WAVE, plus a per-surface user prompt with <surface>, <patient_context>, and <task> blocks. Output is strict JSON.
phase_narration (six-line meditation)
User prompt:
<surface>phase_narration</surface>
<chunk>Number 5 of 5 - Close. Purpose: invite comparison to the start, normalize any outcome, and prepare for a final check-in.</chunk>
<patient_context>{"chunkNumber":5,"matType":"none","medicationStatus":"none","startingIntensityBand":"1-6","trigger":"unknown","usedSubstanceToday":false}</patient_context>
<task>Generate exactly 6 patient-facing narration lines. Return only strict JSON. Schema: {"lines":["...", ...]}</task>
Expected output (set max_new_tokens ≥ 224):
{"lines":["You've made it to the end of this practice.","Check in with your urge now — has anything shifted?","...","...","...","..."]}
reflection (post-session card)
<surface>reflection</surface>
<patient_context>{"durationSeconds":780,"endingIntensity":2,"intakeIntensity":7,"matType":"buprenorphine","medicationStatus":"on_time","sessionsCount":12,"trigger":"stress","usedSubstanceToday":false}</patient_context>
<task>Write the post-session reflection card. Return only strict JSON. Schema: {"insight":"...","journalPromptQuestion":"...","nextSteps":{"a":"...","b":"...","c":"...","d":"..."}}</task>
check_in (multi-turn)
<surface>check_in</surface>
<specialized_surface>lora-check-in-1</specialized_surface>
<patient_context>{"intakeIntensity":7,"matType":"buprenorphine","trigger":"stress"}</patient_context>
<task>Open turn 1: ask the patient to rate their current urge intensity 1-10. Schema: {"reply":"...","endConversation":null}</task>
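Putting the three surfaces together, here is a small helper sketch for building these prompts and validating replies against the schemas shown above. The system-prompt wording, helper names, and extra_blocks parameter are assumptions for illustration, not part of the released model or app.

# Helper sketch only; system-prompt wording and helper names are assumed.
import json

SYSTEM_PROMPT = "You are WAVE, a wellness companion. Reply with strict JSON only."  # assumed wording

def build_messages(surface, patient_context, task, extra_blocks=""):
    # Some surfaces add blocks such as <chunk> or <specialized_surface>;
    # pass them (newline-terminated) via extra_blocks.
    user = (
        f"<surface>{surface}</surface>\n{extra_blocks}"
        f"<patient_context>{json.dumps(patient_context, separators=(',', ':'))}</patient_context>\n"
        f"<task>{task}</task>"
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user},
    ]

EXPECTED_KEYS = {
    "check_in": {"reply", "endConversation"},
    "phase_narration": {"lines"},
    "reflection": {"insight", "journalPromptQuestion", "nextSteps"},
}

def parse_reply(surface, raw):
    obj = json.loads(raw)                               # fails fast if output is not strict JSON
    missing = EXPECTED_KEYS[surface] - obj.keys()
    if missing:
        raise ValueError(f"{surface} reply missing keys: {missing}")
    if surface == "phase_narration" and len(obj["lines"]) != 6:
        raise ValueError("phase_narration must return exactly 6 lines")
    return obj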
Training
| Base | unsloth/gemma-4-E2B-it |
| Method | QLoRA (4-bit) via Unsloth FastModel |
| Adapter rank / alpha / dropout | 16 / 32 / 0 |
| Target modules | q/k/v/o + gate/up/down (language layers only) |
| Vision/audio layers | Frozen |
| Optimizer | adamw_8bit |
| LR | 2e-4, linear schedule |
| Warmup | 64 steps (~5%) |
| Weight decay | 0.001 |
| Max grad norm | 0.3 |
| Batch / grad-accum | 1 / 8 (effective 8) |
| Max sequence length | 3072 |
| Epochs | 3 (1,284 steps) |
| Chat template | gemma-4 (non-thinking, leading <bos> stripped) |
| Response masking | train_on_responses_only (Gemma 4 markers) |
| Hardware | Single RTX 5080 (16 GB) |
| Backend | Unsloth 2026.5.2 + Torch 2.10.0 + CUDA 12.8 |
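For orientation only, a hedged sketch of how the settings above map onto an Unsloth + TRL run. Argument names vary across Unsloth/TRL versions, the dataset handle is hypothetical, and this is not the original training script.

# Rough sketch of the configuration table above (assumed API details, not the original script).
from unsloth import FastModel
from unsloth.chat_templates import train_on_responses_only
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-4-E2B-it",
    max_seq_length=3072,
    load_in_4bit=True,                      # QLoRA (4-bit)
)
model = FastModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    lora_dropout=0.0,
    finetune_vision_layers=False,           # vision/audio layers stay frozen
    finetune_language_layers=True,          # q/k/v/o + gate/up/down
    finetune_attention_modules=True,
    finetune_mlp_modules=True,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,                    # processing_class= in newer TRL releases
    train_dataset=train_ds,                 # hypothetical handle to the formatted WAVE dataset
    args=SFTConfig(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,      # effective batch 8
        num_train_epochs=3,
        learning_rate=2e-4,
        lr_scheduler_type="linear",
        warmup_steps=64,
        weight_decay=0.001,
        max_grad_norm=0.3,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
# Mask the loss to model responses only (Gemma-style turn markers; exact strings may differ).
trainer = train_on_responses_only(
    trainer,
    instruction_part="<start_of_turn>user\n",
    response_part="<start_of_turn>model\n",
)
trainer.train()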
Loss curve: 1.55 (step 1) → 0.76 (avg first 50) → 0.148 (steps 400-500) → 0.112 (last 100). Min 0.0146 at step 1,203. Smooth monotonic decrease, no divergence.
Evaluation
Held-out completion eval (n=428, full test split)
| Metric | Base Gemma 4 E2B | This adapter | Delta |
|---|---|---|---|
| Completion NLL | 4.9327 | 4.7149 | −0.218 |
| Completion perplexity | 138.76 | 111.59 | −27.16 |
| Paired wins (LoRA assigned higher prob to reference) | — | 386 / 428 (90.2%) | — |
| Mean per-example NLL Δ | — | 0.327 nats | 95% bootstrap CI [0.301, 0.352] |
| Median per-example NLL Δ | — | 0.285 nats | — |
| Sign-test p-value | — | 9.54 × 10⁻⁷¹ | overwhelming |
Surface coverage on test split: check_in 144, phase_narration 147, reflection 137.
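For reference, a sketch of how the paired statistics above can be computed from per-example NLL differences (base minus adapter, in nats). The deltas array and its file name are placeholders, not artifacts shipped in this repo.

# Sketch only: `deltas` stands in for the 428 per-example NLL differences.
import numpy as np
from scipy.stats import binomtest

deltas = np.load("per_example_nll_deltas.npy")   # placeholder path, not shipped here

wins = int((deltas > 0).sum())                   # reported: 386 / 428
mean_delta = float(deltas.mean())                # reported: 0.327 nats

# 95% bootstrap CI on the mean delta (reported: [0.301, 0.352])
rng = np.random.default_rng(0)
boot = np.array([
    rng.choice(deltas, size=deltas.size, replace=True).mean()
    for _ in range(10_000)
])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

# One-sided sign test against a 50/50 null (reported p ≈ 9.54e-71)
p_value = binomtest(wins, deltas.size, p=0.5, alternative="greater").pvalue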
Generation eval (n=8 sanity sample from held-out test)
| Gate | Pass rate |
|---|---|
| JSON validity | 100% (8/8) |
| Schema pass | 100% (8/8) |
| Safety pass | 100% |
| Medical-directive pass | 100% |
| Style / no-markdown / no-analysis-voice | 100% |
| Phase 6-line pass | 100% |
| Reflection next-step pass | 100% |
| Check-in turn sequence pass | 100% |
| Mean tokens/sec (Python QLoRA path) | 10.1 |
This was a small sanity-check sample. For a larger 60-example generation gate sweep on the rank-32 sibling, see Maelstrome/lora-wave-session-r32.
Known quirks
- Phase narration needs a generation budget of max_new_tokens ≥ 224 (256 recommended). The six-line JSON output runs to ~207 tokens; with a lower cap the closing ]} gets truncated and JSON.parse fails. check_in is fine at 96; reflection at 192. A suggested per-surface budget mapping is sketched below.
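One way to encode those budgets in application code; this is a suggested mapping derived from the quirk above, not something shipped with the model.

# Suggested per-surface generation budgets (not part of the model files).
MAX_NEW_TOKENS = {
    "check_in": 96,
    "reflection": 192,
    "phase_narration": 256,  # >= 224 required for the six-line JSON; 256 gives headroom
}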
Dataset
Maelstrome/lora-wave-session-dataset — 4,277 examples across three surfaces, stratified 80/10/10 by splitKey (seed 7).
Status mix: 62% synthetic_draft, 37% draft, 1% ready. No real PHI.
Limitations
- Wellness scope only. Do not use for medical diagnosis, crisis triage, or clinical decision support.
- Trained mostly on synthetic and draft-status data, not clinician-validated production data.
- Outputs are constrained-format JSON. The model is not optimized for open-ended chat.
- Training data is English; multilingual behavior was not measured.
- Phase narration needs a per-surface generation budget ≥ 224 tokens or it will be truncated.
License
Gemma Terms of Use. See https://ai.google.dev/gemma/terms.
Framework versions
- PEFT 0.19.1
- Unsloth 2026.5.2
- Transformers 5.5.0
- Torch 2.10.0+cu128