---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
pipeline_tag: text-generation
license: apache-2.0
tags:
  - lora
  - sft
  - transformers
  - trl
  - unsloth
  - bloom-taxonomy
  - sna-learning
datasets:
  - Dev-the-dev91/sna-regal-training-data
---

# SNA Learning — Bloom's Taxonomy Stage 2 (Apply + Analyze)

LoRA adapter for personalized CS/ML concept teaching using Bloom's Taxonomy scaffolding and Netflix-anchored memory palaces.

## Training Details

- Base model: Qwen/Qwen2.5-7B-Instruct (loaded in 4-bit via Unsloth)
- Stage: 2 of 3 (Apply + Analyze levels), continuing from Stage 1 (Remember + Understand)
- Method: SFT with LoRA (rank 32, alpha 32, dropout 0.05); see the configuration sketch after this list
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Epochs: 3
- Learning rate: 1e-4 (cosine schedule, 4 warmup steps)
- Batch size: 1 per device × 8 gradient-accumulation steps (effective batch 8)
- Max sequence length: 1024
- Precision: bf16
- Hardware: Modal (GPU)
- Training time: 789 s (~13 min)
- Framework: TRL + PEFT + Unsloth
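
For reference, here is a minimal sketch of how these hyperparameters map onto a TRL + PEFT run. The actual training used Unsloth's 4-bit model loading, which is omitted here for simplicity; the dataset split and `output_dir` are assumptions, not taken from the original setup.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_name = "Qwen/Qwen2.5-7B-Instruct"
# The actual run loaded the model in 4-bit via Unsloth; plain loading shown here.
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

peft_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,  # effective batch size of 8
    num_train_epochs=3,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_steps=4,
    max_seq_length=1024,  # renamed to `max_length` in newer TRL releases
    bf16=True,
    output_dir="sna-bloom-stage2",  # assumed output path
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=load_dataset("Dev-the-dev91/sna-regal-training-data",
                               split="train"),  # split is an assumption
    peft_config=peft_config,
    processing_class=tokenizer,
)
trainer.train()
```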

## Metrics

| Metric | Value |
|---|---|
| Train loss (avg) | 0.5397 |
| Train loss (final step) | 0.4071 |
| Eval loss | 0.7310 |
| Grad norm (final) | 0.336 |
| Total steps | 165 |

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the Stage 2 LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
model = PeftModel.from_pretrained(base, "Dev-the-dev91/sna-bloom-stage2")

# The adapter repo also ships the tokenizer.
tokenizer = AutoTokenizer.from_pretrained("Dev-the-dev91/sna-bloom-stage2")
```
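
A quick generation check might look like the following; the prompt wording and decoding settings are illustrative, not taken from the training data:

```python
# Illustrative inference sketch; the prompt and max_new_tokens are arbitrary.
messages = [{"role": "user",
             "content": "Walk me through applying binary search, step by step."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```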

## Bloom's Levels Covered

- Stage 1: Remember + Understand (recall, explain, mnemonic, song)
- Stage 2 (this adapter): Apply + Analyze (scenario walkthrough, component decomposition); a hypothetical sample is sketched below
- Stage 3: Evaluate + Create (planned)
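
The dataset schema isn't reproduced in this card, so the following is a purely hypothetical sketch of what an Apply-level (Stage 2) exchange could look like:

```python
# Hypothetical sample; the real schema lives in
# Dev-the-dev91/sna-regal-training-data and may differ.
sample = {
    "messages": [
        {"role": "user",
         "content": "Apply hash maps to deduplicating a Netflix watchlist. "
                    "Walk through the scenario step by step."},
        {"role": "assistant",
         "content": "Scenario walkthrough: treat each title as a key ..."},
    ]
}
```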