---
base_model: Qwen/Qwen3-4B-Instruct-2507
datasets:
- u-10bei/structured_data_with_cot_dataset_512_v2
language:
- en
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
- qlora
- lora
- structured-output
---

# lora_structeval_t_qwen3_4b_v6_try_a
This repository provides a LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit quantization via Unsloth).

This repository contains the LoRA adapter weights only; the base model must be loaded separately.
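A minimal loading sketch with `transformers` and `peft`. The adapter repo id below is assumed from this card's title; replace it with the actual Hub path (e.g. `<user>/lora_structeval_t_qwen3_4b_v6_try_a`) before use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "lora_structeval_t_qwen3_4b_v6_try_a"  # assumed id; adjust to the real Hub path

# Load the base model first, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
```

`PeftModel.from_pretrained` keeps the adapter separate from the base weights; call `model.merge_and_unload()` if you want a single merged model for inference.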
## Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Max sequence length: 1024
- Epochs: 2
- Learning rate: 2e-06
- LoRA: r=128, alpha=128, dropout=0.0
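The LoRA hyperparameters above map onto a `peft` config roughly as follows. This is a sketch: `target_modules` is an assumption (the common choice for Qwen-style models), not something this card states.

```python
from peft import LoraConfig

# r, lora_alpha, and lora_dropout are taken from the table above.
# target_modules is an ASSUMPTION, not confirmed by this card.
lora_config = LoraConfig(
    r=128,
    lora_alpha=128,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```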
## Notes
- Loss is computed only on assistant response tokens; prompt tokens are masked out.
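Response-only loss is usually implemented by setting non-assistant label positions to the ignore index. A minimal sketch of that masking, assuming the assistant token spans are already known (the actual training code is not shown in this card):

```python
IGNORE_INDEX = -100  # the label value PyTorch's cross-entropy loss ignores

def mask_labels(input_ids, assistant_spans):
    """Copy input_ids into labels, masking everything outside assistant spans.

    assistant_spans is a list of half-open (start, end) index ranges that
    cover the assistant response tokens.
    """
    labels = [IGNORE_INDEX] * len(input_ids)
    for start, end in assistant_spans:
        labels[start:end] = input_ids[start:end]
    return labels

# Tokens 4..7 are the assistant response; everything else is masked.
print(mask_labels(list(range(10)), [(4, 8)]))
# [-100, -100, -100, -100, 4, 5, 6, 7, -100, -100]
```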
## Dataset Note
- Base dataset: u-10bei/structured_data_with_cot_dataset_512_v2
- Training file used locally: ../dataset/structured_data_with_cot_dataset_512_v2_filtered.parquet
- Filtering: removed 64 rows where `format=xml` and `schema=api_specification` had invalid XML in the `Output:` block.
- Final train source rows: 3869 (from 3933)
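The filtering step above can be sketched with the standard library's XML parser. The row dicts and field names here are hypothetical stand-ins for the dataset's actual columns:

```python
import xml.etree.ElementTree as ET

def is_valid_xml(text: str) -> bool:
    """Return True if text parses as well-formed XML."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

# Hypothetical rows mimicking the dataset's format/schema/output fields.
rows = [
    {"format": "xml", "schema": "api_specification",
     "output": "<api><endpoint>/v1</endpoint></api>"},
    {"format": "xml", "schema": "api_specification",
     "output": "<api><endpoint>/v1</api>"},  # malformed: mismatched tag
    {"format": "json", "schema": "api_specification", "output": "{}"},
]

# Drop a row only when it is an xml/api_specification sample with invalid XML.
filtered = [
    r for r in rows
    if not (r["format"] == "xml"
            and r["schema"] == "api_specification"
            and not is_valid_xml(r["output"]))
]
print(len(filtered))  # 2
```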
## Sources & Terms
- Training data: u-10bei/structured_data_with_cot_dataset_512_v2
- Please follow the dataset and base model licenses/terms.