LoL LoRAcle v3 (2-Q + fineweb mix)

Fine-tuned from ceselder/loracle-pretrain-v7-sweep-A-oneq-final-step3120 on a mixed corpus (2-Q QA data plus FineWeb, per the model name).

Full training corpus: ceselder/lol-loracle-qa-v3-mix.

Training

  • 3256 train items, 80 eval items (40 LoL holdout × 2 Q/A pairs), 407 steps, lr=1e-5 with linear decay, grad_accum=8
  • val_loss: 2.4669 (step 0, v7 baseline) -> 1.3391 (step 407, final)
  • Cross-LoRA eval: matched=1.2429, crossed=1.6747 (gap=0.4319); the oracle conditions strongly on direction tokens
  • wandb: https://wandb.ai/adamkarvonen/lora-oracles/runs/ml9vulke
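The step count above is consistent with a single epoch at an effective batch size of 8, assuming a per-device batch size of 1 (an assumption, not stated in the card). A quick sanity check:

```python
# Sanity-check the reported step count against the dataset size.
train_items = 3256
grad_accum = 8
per_device_batch = 1  # assumed, not stated in the card
effective_batch = grad_accum * per_device_batch  # 8 items per optimizer step

steps_per_epoch = train_items // effective_batch
print(steps_per_epoch)  # → 407, matching the reported step count (one epoch)
```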
Model tree for ceselder/lol-loracle-v3: this model is a LoRA adapter on the Qwen/Qwen3-14B base.