Stage 2 only: tool prediction on personality LoRA, 3 epochs, batch=2 f1bc339 verified lokegud committed on Mar 24
Fix: disable eval (OOM at step 40; model uses 77/80GB). Train loss converging fine. 467c863 verified lokegud committed on Mar 24
Fix: DataCollatorForSeq2Seq for proper label padding, removes eval crash at step 40 4e9b79b verified lokegud committed on Mar 24
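The label-padding fix above can be sketched in plain Python: DataCollatorForSeq2Seq pads labels with -100 (the index the cross-entropy loss ignores) instead of the tokenizer's pad token, which would otherwise be scored as a real target. A minimal standalone sketch of that behavior, not the library's actual implementation (pad_token_id=0 is an assumed value):

```python
def pad_batch(features, pad_token_id=0, label_pad_token_id=-100):
    """Pad input_ids with the pad token and labels with -100,
    mirroring the label-padding behavior of DataCollatorForSeq2Seq."""
    max_len = max(len(f["input_ids"]) for f in features)
    batch = {"input_ids": [], "labels": []}
    for f in features:
        pad = max_len - len(f["input_ids"])
        # input_ids get the regular pad token
        batch["input_ids"].append(f["input_ids"] + [pad_token_id] * pad)
        # labels get -100 so padding positions contribute nothing to the loss
        batch["labels"].append(f["labels"] + [label_pad_token_id] * pad)
    return batch

features = [
    {"input_ids": [5, 6, 7], "labels": [5, 6, 7]},
    {"input_ids": [8, 9], "labels": [8, 9]},
]
batch = pad_batch(features)
```

Padding labels with the pad token instead of -100 is a common cause of eval-time crashes or skewed eval loss, which matches the step-40 crash described above.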
Fix: extract text tokenizer from VLM Processor to avoid image_utils error 9dd474e verified lokegud committed on Mar 24
Fix: pre-tokenize with input_ids+labels instead of relying on SFTTrainer auto-tokenization c3edb01 verified lokegud committed on Mar 24
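The pre-tokenization approach above can be sketched without a real tokenizer (the token IDs below are placeholders): each example carries explicit input_ids and labels, with the prompt portion of labels masked to -100 so only completion tokens are trained on, rather than handing SFTTrainer a raw text column to tokenize.

```python
def build_example(prompt_ids, completion_ids):
    """Build a pre-tokenized example with explicit labels, instead of
    relying on SFTTrainer's automatic tokenization of a text column."""
    input_ids = prompt_ids + completion_ids
    # Mask prompt tokens with -100 so the loss covers only the completion
    labels = [-100] * len(prompt_ids) + completion_ids
    return {"input_ids": input_ids, "labels": labels}

ex = build_example([1, 2, 3], [4, 5])
```

A dataset of such dicts can be passed to the trainer directly, sidestepping whatever the auto-tokenization path was doing wrong in this setup.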
Fix: use messages format directly instead of text column for SFTTrainer 5.x compat b34273c verified lokegud committed on Mar 24
Fix: remove_unused_columns=False for transformers 5.x, warmup_steps instead of warmup_ratio 5846d17 verified lokegud committed on Mar 24
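The transformers 5.x compatibility fix above amounts to two training-argument changes. A hedged sketch of the kwargs as a plain dict (the warmup value is illustrative; batch size and epochs are taken from the Stage 2 commit above):

```python
# Keyword arguments intended for transformers TrainingArguments / TRL SFTConfig.
# remove_unused_columns=False stops the trainer from dropping the pre-tokenized
# columns (input_ids, labels); warmup_steps replaces warmup_ratio.
training_kwargs = {
    "remove_unused_columns": False,   # required with transformers 5.x here
    "warmup_steps": 10,               # illustrative value, replaces warmup_ratio
    "per_device_train_batch_size": 2, # batch=2, per the Stage 2 run
    "num_train_epochs": 3,            # 3 epochs, per the Stage 2 run
}
```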