# lora_structeval_t_qwen3_4b_v5_parammatch

This repository provides a LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit quantization, via Unsloth).

This repository contains the LoRA adapter weights only; the base model must be loaded separately.
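Since only the adapter is shipped, inference requires attaching it to the base model. A minimal sketch using `transformers` and `peft` (both assumed installed; the repo id below is taken from this card's title and may need adjusting):

```python
# Sketch: load the base model, then attach this LoRA adapter with PEFT.
# Assumes `transformers` and `peft` are installed and the adapter repo id
# "magoemu/lora_structeval_t_qwen3_4b_v5_parammatch" matches this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "magoemu/lora_structeval_t_qwen3_4b_v5_parammatch"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attaches LoRA weights
```

For 4-bit inference matching the QLoRA training setup, the base model can instead be loaded with a `BitsAndBytesConfig` quantization config.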
## Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Max sequence length: 1024
- Epochs: 2
- Learning rate: 2e-06
- LoRA: r=128, alpha=128, dropout=0.0
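With r=128 and alpha=128, the LoRA scaling factor alpha/r equals 1.0, so the low-rank update is applied unscaled. A tiny pure-Python sketch of the effective weight W' = W + (alpha/r)·B·A, using toy matrices rather than the real 4B-parameter layers:

```python
# Illustrative sketch of the LoRA update W' = W + (alpha/r) * (B @ A).
# The adapter in this repo uses r=128, alpha=128, so alpha/r == 1.0;
# the toy example below uses rank 1 on a 2x2 weight for readability.
def matmul(B, A):
    rows, inner, cols = len(B), len(A), len(A[0])
    return [[sum(B[i][k] * A[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_update(W, A, B, alpha, r):
    scale = alpha / r            # 1.0 whenever alpha == r, as here
    delta = matmul(B, A)         # low-rank update, shape of W
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: 2x2 identity weight, rank-1 A (r x in) and B (out x r).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]
B = [[0.5], [0.25]]
W_eff = lora_update(W, A, B, alpha=1, r=1)
# W_eff == [[1.5, 1.0], [0.25, 1.5]]
```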
## Notes
- Loss is computed only on assistant response tokens.
- Optional chain-of-thought (CoT) masking is enabled or disabled via the training CONFIG.
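Response-only loss is typically implemented by setting labels to -100 (the ignore index used by most cross-entropy trainers) for every token outside the assistant response. A hedged sketch of this masking, with a hypothetical per-token `roles` list standing in for whatever segmentation the actual training code uses:

```python
# Sketch of response-only loss masking with optional CoT masking.
# The `roles` list is hypothetical: it tags each token with its segment.
# Tokens labeled IGNORE_INDEX contribute nothing to the training loss.
IGNORE_INDEX = -100

def mask_labels(token_ids, roles, mask_cot=False):
    """roles[i] is one of "system", "user", "assistant", "cot".
    When mask_cot is True, CoT tokens are also excluded from the loss."""
    labels = []
    for tok, role in zip(token_ids, roles):
        keep = role == "assistant" or (role == "cot" and not mask_cot)
        labels.append(tok if keep else IGNORE_INDEX)
    return labels

tokens = [11, 12, 13, 14, 15]
roles = ["user", "cot", "assistant", "assistant", "user"]
mask_labels(tokens, roles)                 # CoT kept in the loss
mask_labels(tokens, roles, mask_cot=True)  # CoT excluded as well
```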
## Sources & Terms

- Training data: u-10bei/structured_data_with_cot_dataset_512_v4

Please follow the dataset and base model licenses/terms.