# ADHD EEG Detection - Fine-Tuned Gemma 3 (QLoRA)
This model is a QLoRA fine-tuned version of google/gemma-3-1b-it for clinical EEG-based ADHD classification.
## Training Details
| Parameter | Value |
|---|---|
| Base Model | google/gemma-3-1b-it |
| Method | QLoRA (4-bit NF4 quantization) |
| LoRA Rank | 16 |
| LoRA Alpha | 32 |
| Trainable Parameters | 13,045,760 (1.29%) |
| Training Samples | 201 (from 84 real EEG subjects) |
| Epochs | 5 |
| Total Steps | 130 |
| Learning Rate | 2e-4 (cosine) |
| Final Training Loss | 0.5635 |
| Final Validation Loss | 0.2611 |
| GPU | T4 (16GB) |
| Training Time | ~8 minutes |
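The step count in the table is consistent with an effective batch size of 8 over the 201 training samples. This is an assumption (e.g. per-device batch 2 with gradient accumulation 4); it is chosen here only because it reproduces the reported numbers:

```python
import math

# Hypothetical effective batch size (per-device batch x gradient accumulation).
# Not stated in the card; chosen because it reproduces the reported step count.
effective_batch_size = 8
train_samples = 201
epochs = 5

steps_per_epoch = math.ceil(train_samples / effective_batch_size)  # 26
total_steps = steps_per_epoch * epochs

print(total_steps)  # 130, matching the table
```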
## Dataset
- 84 real paediatric EEG recordings (51 ADHD, 33 Control)
- 252 samples generated (3 per subject; 201 used for training, the remainder held out for validation):
  - Full spectral classification
  - Band-specific interpretation
  - Frontal asymmetry analysis
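The three-samples-per-subject scheme could be generated along these lines. This is a sketch: the template names and wording are assumptions, not the actual training prompts.

```python
# Illustrative templates for the three tasks per subject; wording is assumed.
TEMPLATES = {
    "full_spectral": "Classify this subject as ADHD or Control given: {features}",
    "band_specific": "Interpret the frontal band powers for this subject: {features}",
    "frontal_asymmetry": "Analyse the frontal asymmetry metrics (FAA, FTA): {features}",
}

def make_samples(subject_features: dict, label: str) -> list[dict]:
    """Build one training sample per task template for a single subject."""
    feats = ", ".join(f"{k}={v}" for k, v in subject_features.items())
    return [
        {"task": name, "prompt": tpl.format(features=feats), "label": label}
        for name, tpl in TEMPLATES.items()
    ]

# Placeholder cohort standing in for the 84 real subjects
subjects = [({"TBR": 3.73, "FAA": -0.14}, "ADHD")] * 84
dataset = [s for feats, label in subjects for s in make_samples(feats, label)]
print(len(dataset))  # 252 = 84 subjects x 3 samples
```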
## EEG Features Used
- Mean Frontal Theta/Beta Ratio (TBR)
- Frontal Alpha Asymmetry (FAA)
- Frontal Theta Asymmetry (FTA)
- Frontal band powers (delta, theta, alpha, beta, gamma)
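For reference, the two headline biomarkers reduce to simple functions of band power. A minimal sketch, assuming the common log-ratio convention for frontal alpha asymmetry (right minus left); the exact definitions used during feature extraction are not stated in this card:

```python
import math

def theta_beta_ratio(theta_power: float, beta_power: float) -> float:
    """Frontal theta/beta ratio (TBR); elevated values are a classic ADHD marker."""
    return theta_power / beta_power

def frontal_alpha_asymmetry(alpha_left: float, alpha_right: float) -> float:
    """FAA as ln(right) - ln(left) alpha power (assumed convention)."""
    return math.log(alpha_right) - math.log(alpha_left)

# Using the band powers from the usage example below:
print(round(theta_beta_ratio(45.2, 12.1), 4))          # 3.7355, close to the prompt's 3.73
print(round(frontal_alpha_asymmetry(20.0, 17.4), 4))   # -0.1393, close to the prompt's -0.14
```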
## Validation Loss Trajectory
| Step | Training Loss | Validation Loss |
|---|---|---|
| 50 | 0.3021 | 0.2756 |
| 100 | 0.2660 | 0.2621 |
| 130 | 0.2754 | 0.2611 |
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model and attach the fine-tuned LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-1b-it",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Fetrat/adhd-eeg-gemma3-qlora")
model = PeftModel.from_pretrained(base_model, "Fetrat/adhd-eeg-gemma3-qlora")

prompt = """<start_of_turn>user
You are a clinical EEG expert. Classify this subject as ADHD or Control.
EEG Spectral Biomarkers:
Mean frontal TBR: 3.7300
Frontal theta power (uV2/Hz): 45.2000
Frontal beta power (uV2/Hz): 12.1000
Frontal alpha power (uV2/Hz): 18.5000
Frontal Alpha Asymmetry (FAA): -0.1400
Frontal Theta Asymmetry (FTA): 0.0900<end_of_turn>
<start_of_turn>model
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    # do_sample=True is required for the temperature setting to take effect
    outputs = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.1)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result.split("model\n")[-1].strip())
```
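The generated text is free-form, so downstream use may want a robust label-extraction step. One simple approach (an assumption for illustration, not part of the released code):

```python
def extract_label(generated_text: str) -> str:
    """Return "ADHD" if the output mentions it, else "Control", else "Unknown"."""
    text = generated_text.lower()
    if "adhd" in text:
        return "ADHD"
    if "control" in text:
        return "Control"
    return "Unknown"

print(extract_label("Classification: ADHD (elevated TBR of 3.73)"))        # ADHD
print(extract_label("The profile is consistent with a Control subject."))  # Control
```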
## Citation
BCI Course Project 2026 - EEG-Based ADHD Detection with Fine-Tuned Gemma 3 and QLoRA.