NeuromotiveLM — Reminiscence Therapy Companion

The first language model fine-tuned specifically for reminiscence therapy with older adults living with dementia and mild cognitive impairment.

  • Developed by: fm1320
  • License: apache-2.0
  • Finetuned from model: unsloth/qwen3-0.6b-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Overview

NeuromotiveLM is designed to act as a reminiscence companion — paraphrasing memories, validating emotions, and using sensory language to support calm reflection. It was trained on 1,390 synthetic therapeutic examples covering warm reflections, sensory engagement, grief validation, silence acceptance, and multi-turn conversations.

Training

  • Base model: Qwen3-0.6B (4-bit quantized)
  • Method: LoRA (r=8, alpha=16) — SFT + DPO
  • SFT: 3 epochs, lr=1e-4, 1,165 examples
  • DPO: 2 epochs, lr=5e-5, beta=0.1, 225 preference pairs
  • Hardware: Google Colab T4 GPU (free tier)

DPO training used 7 rejection categories: too_minimal, verbose, probing, interpretive, advice_giving, clinical, and entity_addition.
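For reference, the objective behind the DPO stage (with beta=0.1, as above) can be sketched as follows. This is a generic illustration of the loss that TRL's DPOTrainer optimizes per preference pair, not code from the actual training run:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are total log-probabilities of the chosen and rejected
    responses under the policy (pi_*) and the frozen reference (ref_*).
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    # -log sigmoid(beta * margin): small when the policy cleanly prefers
    # the chosen response, log(2) when it is indifferent.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# An indifferent policy pays log(2) ≈ 0.6931; widening the preference
# margin (e.g. against a "verbose" or "advice_giving" rejection) drives
# the loss toward zero.
print(dpo_loss(-11.0, -11.0, -11.0, -11.0))  # ≈ 0.6931
```

With beta=0.1 the gradient is gentle, which suits a small 0.6B policy: it nudges responses away from the seven rejection styles without drifting far from the SFT checkpoint.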

Behavioral Principles

  • Memory preservation over correction
  • Emotional validation over factual accuracy
  • Non-directive — never gives advice, never asks "why"
  • Minimalism — 1–3 sentence responses
  • Safety through inaction — slows down if distress appears
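The minimalism principle above is enforced primarily through training, but a deployment wrapper can add a simple guard on top. A minimal sketch, assuming a post-processing step (the function name and the naive sentence splitter are illustrative, not part of the released model):

```python
import re

def cap_sentences(text: str, max_sentences: int = 3) -> str:
    """Truncate a response to at most `max_sentences` sentences,
    mirroring the companion's 1-3 sentence minimalism principle."""
    # Naive split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:max_sentences])

reply = ("Your mother making her own Sunday bread. That warmth in her hands. "
         "The smell of flour. A kitchen full of it.")
print(cap_sentences(reply))  # keeps only the first three sentences
```

A guard like this keeps occasional verbose generations within the intended register without touching responses that are already short.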

Example Outputs

User: I remember my mother baking bread every Sunday.

Model: Your mother making her own Sunday bread — that warmth in her hands and the smell of flour. That kitchen memory holds a special place.

User: My father used to take me fishing at the lake.

Model: Taking you to the lake with your father — that sound of waves and the feel of the water. Where he and you went together was a special place.

User: I miss my sister. She passed away last year.

Model: Your sister last year — that love is still there in the quiet of the room. I'm here with you.

TruthfulQA MC2 Benchmark

TruthfulQA MC2 accuracy (± stderr):

  • Qwen3-0.6B (base): 0.4286 ± 0.0155
  • NeuromotiveLM: 0.4412 ± 0.0155

This is a +1.3 point gain over the base Qwen3-0.6B, indicating that therapeutic fine-tuning did not degrade truthfulness and may even have marginally improved it, plausibly because the DPO preference data penalizes confabulation (e.g. the entity_addition and interpretive rejection categories).
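The size of that gain can be put next to the reported standard errors. A rough check, assuming the two runs are independent:

```python
import math

base_acc, base_se = 0.4286, 0.0155  # Qwen3-0.6B (base)
ft_acc, ft_se = 0.4412, 0.0155      # NeuromotiveLM

delta = ft_acc - base_acc                    # +1.26 points
se_diff = math.sqrt(base_se**2 + ft_se**2)   # stderr of the difference

# The gain is real but smaller than one standard error of the difference,
# consistent with the card's hedged "may have marginally improved" claim.
print(f"delta={delta:.4f}, se={se_diff:.4f}")  # delta=0.0126, se=0.0219
```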

Limitations

  • 0.6B parameter model — responses can be repetitive on complex emotional scenarios
  • Not a substitute for professional therapy or clinical care
  • Designed for supervised use alongside caregivers, not autonomous deployment

Intended Use

Research and assisted caregiving contexts. This model is intended to support — not replace — human caregivers in structured reminiscence therapy sessions with older adults.
