# SNA Learning: Bloom's Taxonomy Stage 1 (Remember + Understand)
LoRA adapter for personalized CS/ML concept teaching using Bloom's Taxonomy scaffolding and Netflix-anchored memory palaces.
## Training Details
- Base model: Qwen/Qwen2.5-7B-Instruct (4-bit via Unsloth)
- Stage: 1 of 3 (Remember + Understand levels)
- Method: SFT with LoRA (rank 32, alpha 32, dropout 0.05)
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Epochs: 3
- Learning rate: 1e-4 (cosine schedule, 4 warmup steps)
- Batch size: 1 × 8 gradient accumulation
- Max sequence length: 1024
- Precision: bf16
- Hardware: Modal (GPU)
- Training time: 1094 s (~18 min)
- Framework: TRL + PEFT + Unsloth
## Metrics
| Metric | Value |
|---|---|
| Train loss (avg) | 0.5260 |
| Train loss (final step) | 0.3306 |
| Eval loss | 0.5476 |
| Grad norm (final) | 0.296 |
| Total steps | 318 |
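As a rough sanity check, the eval loss above corresponds to a token-level perplexity of about 1.73. A sketch, assuming the reported loss is mean cross-entropy in nats:

```python
import math

# Perplexity = exp(mean cross-entropy loss), with loss in nats
eval_loss = 0.5476
perplexity = math.exp(eval_loss)
print(f"{perplexity:.3f}")  # ≈ 1.729
```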
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the Stage 1 LoRA adapter
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
model = PeftModel.from_pretrained(base, "Dev-the-dev91/sna-bloom-stage1")
tokenizer = AutoTokenizer.from_pretrained("Dev-the-dev91/sna-bloom-stage1")
```
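For prompting, Qwen2.5-Instruct models use a ChatML-style template; in practice you would call `tokenizer.apply_chat_template()`, but a minimal sketch of the underlying format (the example question is hypothetical) is:

```python
def build_chatml_prompt(user_msg: str,
                        system_msg: str = "You are a helpful assistant.") -> str:
    """Assemble a ChatML-style prompt as used by Qwen2.5-Instruct.

    A sketch of the format only; prefer tokenizer.apply_chat_template()
    in real code so the template always matches the model.
    """
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("Explain what a hash table is.")
```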
## Bloom's Levels Covered
- Stage 1 (this adapter): Remember + Understand (recall, explain, mnemonic, song)
- Stage 2: Apply + Analyze (scenario walkthrough, component decomposition)
- Stage 3: Evaluate + Create (planned)