# SNA Learning – Bloom's Taxonomy Stage 2 (Apply + Analyze)
LoRA adapter for personalized CS/ML concept teaching using Bloom's Taxonomy scaffolding and Netflix-anchored memory palaces.
## Training Details
- Base model: Qwen/Qwen2.5-7B-Instruct (4-bit via Unsloth)
- Stage: 2 of 3 (Apply + Analyze levels), continuing from Stage 1 (Remember + Understand)
- Method: SFT with LoRA (rank 32, alpha 32, dropout 0.05)
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Epochs: 3
- Learning rate: 1e-4 (cosine schedule, 4 warmup steps)
- Batch size: 1 × 8 gradient accumulation (effective batch size 8)
- Max sequence length: 1024
- Precision: bf16
- Hardware: Modal (GPU)
- Training time: 789 s (~13 min)
- Framework: TRL + PEFT + Unsloth
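The hyperparameters above can be sketched as a TRL + PEFT training configuration. This is a minimal sketch assuming the standard `LoraConfig` and `SFTConfig` APIs; the `output_dir` is a placeholder, and the dataset/model wiring is omitted:

```python
from peft import LoraConfig
from trl import SFTConfig

# LoRA settings matching the card: rank 32, alpha 32, dropout 0.05,
# applied to all attention and MLP projection matrices.
peft_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# SFT settings matching the card: 3 epochs, lr 1e-4 with cosine decay
# and 4 warmup steps, batch 1 x 8 gradient accumulation, 1024-token
# sequences, bf16 precision.
sft_config = SFTConfig(
    output_dir="sna-bloom-stage2",  # placeholder
    num_train_epochs=3,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_steps=4,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    max_seq_length=1024,
    bf16=True,
)
```

These objects would then be passed to `trl.SFTTrainer` along with the 4-bit Unsloth-loaded base model and the Stage 2 dataset.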
## Metrics
| Metric | Value |
|---|---|
| Train loss (avg) | 0.5397 |
| Train loss (final step) | 0.4071 |
| Eval loss | 0.7310 |
| Grad norm (final) | 0.336 |
| Total steps | 165 |
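For intuition, a cross-entropy loss maps to token-level perplexity via exp(loss). A quick check on the eval loss above:

```python
import math

eval_loss = 0.7310  # from the table above
perplexity = math.exp(eval_loss)
print(f"{perplexity:.3f}")  # ≈ 2.077
```

The gap between train loss (0.54 avg) and eval loss (0.73) suggests mild but not severe overfitting after 3 epochs.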
## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
model = PeftModel.from_pretrained(base, "Dev-the-dev91/sna-bloom-stage2")
tokenizer = AutoTokenizer.from_pretrained("Dev-the-dev91/sna-bloom-stage2")
```

Since the base model is an Instruct variant, format prompts with `tokenizer.apply_chat_template` before generation.
## Bloom's Levels Covered
- Stage 1: Remember + Understand (recall, explain, mnemonic, song)
- Stage 2 (this): Apply + Analyze (scenario walkthrough, component decomposition)
- Stage 3: Evaluate + Create (planned)