# qwen3-p5js-physics-lora
LoRA adapter for Qwen/Qwen3-0.6B, tuned for generating educational p5.js animations from natural-language prompts.
Primary objective: produce runnable, classroom-friendly JavaScript sketches (`setup()` + `draw()`) that explain K-12 science concepts visually.
## Model Details
- Base model: `Qwen/Qwen3-0.6B`
- Fine-tuning method: LoRA + supervised fine-tuning (SFT)
- Domain: p5.js animation code generation for science education
- Intended language: English prompts and code comments
- Adapter repo: `mr-dee/qwen3-p5js-physics-lora`
- Source project: https://github.com/dylanler/qwen3-p5js-physics
## Intended Use
Direct use:
- Generate p5.js teaching demos for topics like gravity, circuits, optics, astronomy, and earth science.
- Bootstrap lesson visuals that teachers/students can edit locally.
Downstream use:
- Interactive educational apps, coding workshops, and science visualization demos.
Out of scope:
- Safety-critical software.
- Scientific simulation requiring high-precision numerical correctness.
- Classroom deployment without prior human review and validation.
## Training Data
Dataset summary:
- 1,036 instruction/code examples.
- 124 unique K-12 science topics.
- Synthetic dataset generated via parallel agent workflows and validated into JSONL format.
Each example contains:
- `instruction`
- `topic`
- `grade_level`
- `p5js_code` (full runnable sketch)
Data style constraints emphasized:
- 600x400 canvas
- `setup()` and `draw()` structure
- labels/annotations for explanation
- interactive and animated behavior
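For illustration, a dataset record with the schema above might look like the following JSONL line. The field names come from the card; the example content is invented, not an actual training sample.

```python
import json

# Hypothetical record illustrating the dataset schema; the field names match
# the card, but the values here are invented for illustration only.
record = {
    "instruction": "Create an animation that teaches projectile motion.",
    "topic": "projectile motion",
    "grade_level": "middle school",
    "p5js_code": (
        "function setup() { createCanvas(600, 400); }\n"
        "function draw() { background(220); }"
    ),
}

line = json.dumps(record)  # one JSON object per line in a JSONL file
parsed = json.loads(line)
print(sorted(parsed.keys()))
```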
## Training Procedure
Hardware and runtime:
- 4x NVIDIA A100-SXM4-80GB
- ~171.9 seconds total training runtime
- multi-GPU training via `accelerate`
LoRA config:
- `r=64`
- `lora_alpha=128`
- `lora_dropout=0.05`
- target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
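These hyperparameters map onto PEFT's `LoraConfig` roughly as follows. This is a sketch of the configuration implied by the card; the project's actual training script may differ in detail.

```python
from peft import LoraConfig

# Sketch of the adapter configuration implied by the card's hyperparameters.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```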
Optimization config:
- epochs: 3
- per-device batch size: 4
- gradient accumulation: 2
- effective batch size: 32 (4 per device x 2 accumulation steps x 4 GPUs)
- learning rate: 2e-4
- scheduler: cosine, warmup ratio 0.05
- weight decay: 0.01
- max sequence length: 2048
- precision: bf16
- optimizer: AdamW
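The effective batch size above follows directly from the other settings; a one-line sanity check:

```python
# Effective batch size from the card's settings:
# per-device batch x gradient accumulation steps x number of GPUs.
per_device_batch = 4
grad_accum = 2
num_gpus = 4

effective_batch = per_device_batch * grad_accum * num_gpus
print(effective_batch)  # 32, matching the card
```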
## Training/Eval Snapshot
- Train loss (logged): 0.9090 -> 0.4950 (step 10 to step 90)
- Train run average loss (`train_metrics.json`): 0.5917
- Eval loss (step 50): 0.6164
- Reported token accuracy during run: up to ~85.6%
These are training-time indicators, not full benchmark performance across external test sets.
## Quick Start

### Transformers + PEFT
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "Qwen/Qwen3-0.6B"
ADAPTER = "mr-dee/qwen3-p5js-physics-lora"

tokenizer = AutoTokenizer.from_pretrained(BASE, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(model, ADAPTER)

messages = [
    {
        "role": "system",
        "content": (
            "You are a p5.js animation expert for K-12 physics education. "
            "Return complete, runnable JavaScript."
        ),
    },
    {
        "role": "user",
        "content": "Create an interactive p5.js sketch that teaches projectile motion.",
    },
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    temperature=0.4,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
### vLLM Serving (LoRA)
```bash
vllm serve Qwen/Qwen3-0.6B \
  --enable-lora \
  --lora-modules p5js=mr-dee/qwen3-p5js-physics-lora \
  --tensor-parallel-size 2 \
  --max-model-len 2048
```
Then query:
```bash
curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "p5js",
    "messages": [
      {"role":"system","content":"You are a p5.js animation expert for K-12 physics education."},
      {"role":"user","content":"Create an animation showing wave interference with labeled nodes and antinodes."}
    ],
    "max_tokens": 800,
    "temperature": 0.4
  }'
```
Important: keep `prompt_tokens + max_tokens <= max_model_len`; otherwise vLLM returns an HTTP 400 validation error.
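One way to stay within that budget is to cap the request's token count client-side before sending it. The helper below is illustrative (the name and signature are ours, not part of vLLM), assuming the `--max-model-len 2048` setting above:

```python
# Illustrative client-side helper (not part of vLLM): caps max_tokens so that
# prompt_tokens + max_tokens never exceeds the server's --max-model-len,
# avoiding the HTTP 400 validation error described above.
def fit_max_tokens(prompt_tokens: int, requested_max_tokens: int, max_model_len: int = 2048) -> int:
    budget = max_model_len - prompt_tokens
    if budget <= 0:
        raise ValueError("prompt already fills the context window")
    return min(requested_max_tokens, budget)

print(fit_max_tokens(prompt_tokens=1500, requested_max_tokens=800))  # 548
```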
## Limitations and Risks
- Output is not guaranteed bug-free JavaScript; review before use.
- Physical explanations may be simplified or occasionally incorrect.
- Performance drops for domains outside training distribution.
- Small base model can struggle with very long or highly complex simulations.
## Responsible Use
- Use with human review in educational settings.
- Validate scientific correctness before presenting to students.
- Sandbox or lint generated code before running in production applications.
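As a starting point for that kind of review, a pre-flight check can verify the generated sketch at least defines `setup()` and `draw()` and avoids browser APIs a classroom sandbox would likely disallow. The function and token list below are illustrative examples of our own, not part of this project:

```python
import re

# Browser APIs a classroom sandbox might want to reject outright
# (an illustrative, deliberately incomplete list).
DISALLOWED = ("fetch(", "XMLHttpRequest", "eval(", "document.cookie")

def sketch_looks_sane(js_code: str) -> bool:
    """Naive static check: requires setup()/draw() and no disallowed tokens."""
    has_setup = re.search(r"\bfunction\s+setup\s*\(", js_code) is not None
    has_draw = re.search(r"\bfunction\s+draw\s*\(", js_code) is not None
    no_banned = not any(token in js_code for token in DISALLOWED)
    return has_setup and has_draw and no_banned

good = "function setup() { createCanvas(600, 400); }\nfunction draw() { background(220); }"
print(sketch_looks_sane(good))  # True
```

This is a lint-style filter only; it does not replace running the sketch in a sandboxed p5.js editor before classroom use.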
## Framework Versions
- PEFT: 0.18.1
- Transformers: 4.57.6
- TRL: 0.27.2
- PyTorch: 2.9.1
- Accelerate: 1.12+
## Citation
If you use this adapter, cite:
```bibtex
@misc{qwen3_p5js_physics_lora_2026,
  title  = {qwen3-p5js-physics-lora},
  author = {mr-dee},
  year   = {2026},
  url    = {https://huggingface.co/mr-dee/qwen3-p5js-physics-lora}
}
```