lora_structeval_t_qwen3_4b_v6_try_a

This repository provides a LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit quantization via Unsloth).

This repository contains LoRA adapter weights only. The base model must be loaded separately.
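A minimal sketch of loading the base model and attaching this adapter with transformers + peft. The adapter repo id is taken from this card; `device_map="auto"` and the overall loading flow are assumptions you may need to adapt to your environment.

```python
# Sketch: load the frozen base model, then apply the LoRA adapter on top.
BASE_MODEL_ID = "Qwen/Qwen3-4B-Instruct-2507"
ADAPTER_ID = "magoemu/lora_structeval_t_qwen3_4b_v6_try_a"

def load_model():
    # Imported lazily so the constants above can be reused without
    # pulling in the heavy dependencies.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID, device_map="auto")
    # PeftModel.from_pretrained attaches the adapter weights to the base model.
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    return tokenizer, model
```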

Training Configuration

  • Base model: Qwen/Qwen3-4B-Instruct-2507
  • Max sequence length: 1024
  • Epochs: 2
  • Learning rate: 2e-06
  • LoRA: r=128, alpha=128, dropout=0.0
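A sketch of how the configuration above might map onto Unsloth's QLoRA API. The hyperparameters come from this card; the `target_modules` list is an assumption (a common choice for Qwen-style models), not taken from the actual training script.

```python
# Hyperparameters as stated on this card.
MAX_SEQ_LENGTH = 1024
LORA_R = 128
LORA_ALPHA = 128
LORA_DROPOUT = 0.0

def build_peft_model():
    # Imported lazily so the constants above stay usable on their own.
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="Qwen/Qwen3-4B-Instruct-2507",
        max_seq_length=MAX_SEQ_LENGTH,
        load_in_4bit=True,  # QLoRA: 4-bit quantized base weights
    )
    model = FastLanguageModel.get_peft_model(
        model,
        r=LORA_R,
        lora_alpha=LORA_ALPHA,
        lora_dropout=LORA_DROPOUT,
        # Assumed projection layers; adjust to match the real training setup.
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )
    return model, tokenizer
```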

Notes

  • Loss is applied to assistant response tokens only (prompt tokens are masked out).
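Response-only loss is typically implemented by setting the labels of non-assistant tokens to -100, which PyTorch/transformers cross-entropy losses ignore. A minimal sketch with toy token ids (the span indices here are illustrative, not from the actual training code):

```python
IGNORE_INDEX = -100  # cross-entropy ignores positions with this label

def mask_non_assistant(input_ids, response_start, response_end):
    """Copy input_ids into labels inside [response_start, response_end);
    every other position gets IGNORE_INDEX so it contributes no loss."""
    return [
        tok if response_start <= i < response_end else IGNORE_INDEX
        for i, tok in enumerate(input_ids)
    ]

# Toy sequence where tokens 2..4 are the assistant response.
labels = mask_non_assistant([11, 12, 13, 14, 15], 2, 5)
# labels == [-100, -100, 13, 14, 15]
```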

Dataset Note

  • Base dataset: u-10bei/structured_data_with_cot_dataset_512_v2
  • Training file used locally: ../dataset/structured_data_with_cot_dataset_512_v2_filtered.parquet
  • Filtering: removed 64 rows where format=xml and schema=api_specification and the Output: block contained invalid XML.
  • Final train source rows: 3869 (down from 3933)
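The filtering step can be sketched with the standard library's XML parser. The row dictionaries and column names below are assumptions for illustration; the real filter ran over the parquet file referenced above.

```python
import xml.etree.ElementTree as ET

def is_valid_xml(text):
    """Return True if text parses as well-formed XML."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

# Toy rows standing in for the dataset; field names are assumptions.
rows = [
    {"format": "xml", "schema": "api_specification", "output": "<api><v>1</v></api>"},
    {"format": "xml", "schema": "api_specification", "output": "<api><v>1</api>"},
]

# Keep everything except xml/api_specification rows with invalid XML output.
kept = [
    r for r in rows
    if not (r["format"] == "xml"
            and r["schema"] == "api_specification"
            and not is_valid_xml(r["output"]))
]
```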

Sources & Terms

Training data: u-10bei/structured_data_with_cot_dataset_512_v2. Please follow the dataset and base-model licenses/terms.
