qwen3-4b-structured-output-lora_4

This repository provides a LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit, Unsloth).

This repository contains LoRA adapter weights only. The base model must be loaded separately.

Training Objective

This adapter is trained to improve structured output accuracy (JSON / YAML / XML / TOML / CSV).

Loss is applied only to the final assistant output, while intermediate reasoning (Chain-of-Thought) is masked.
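In Transformers-style training, masking loss on intermediate reasoning is typically done by setting the label ids of non-target tokens to -100, which PyTorch's cross-entropy loss ignores. A minimal sketch of that idea (the token ids and span indices below are illustrative, not taken from the actual pipeline):

```python
IGNORE_INDEX = -100  # ignored by PyTorch's CrossEntropyLoss

def mask_labels(input_ids, target_start, target_end):
    """Copy input_ids to labels, masking every position outside the
    final structured-output span [target_start, target_end)."""
    labels = list(input_ids)
    for i in range(len(labels)):
        if not (target_start <= i < target_end):
            labels[i] = IGNORE_INDEX
    return labels

# Example: positions 0-5 hold prompt + reasoning tokens,
# positions 6-9 hold the final structured output.
ids = [101, 7, 8, 9, 10, 11, 42, 43, 44, 102]
labels = mask_labels(ids, target_start=6, target_end=10)
# labels -> [-100, -100, -100, -100, -100, -100, 42, 43, 44, 102]
```

Only the unmasked span contributes to the gradient, so the model is optimized purely on producing the final structured output.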

Training Configuration

  • Base model: Qwen/Qwen3-4B-Instruct-2507
  • Method: QLoRA (4-bit)
  • Max sequence length: 1024
  • Epochs: 1
  • Learning rate: 2e-05
  • LoRA: r=32, alpha=64
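Under these hyperparameters, the PEFT adapter configuration would look roughly like the following. This is a hedged sketch: `target_modules` and `lora_dropout` are assumptions, since the card does not list them.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,             # rank, as listed above
    lora_alpha=64,    # alpha = 2 * r, as listed above
    lora_dropout=0.0,           # assumption: not stated in the card
    target_modules=[            # assumption: a common choice for Qwen-style blocks
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```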

Data Preprocessing

During supervised fine-tuning, preprocessing was applied only to assistant messages. Explanatory or reasoning-style text (e.g., "Explanation:", "Approach:", or free-form natural-language descriptions) was excluded from the training targets, so loss was computed only on the final structured output. As a result, the model learns to emit just the structured result (JSON, YAML, or another task-specific format) without additional explanations.

System and user messages were left unchanged. This design choice prioritizes output format consistency and correctness over verbose or step-by-step reasoning in the generated text.
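The preprocessing described above can be sketched as a filter that drops leading explanation prose from an assistant message before building the training target. This is a simplified illustration; the markers and detection rules used in the actual pipeline may differ.

```python
def extract_structured_target(text: str) -> str:
    """Return the structured-output portion of an assistant message,
    dropping leading explanation prose. Simplified: real preprocessing
    may use different heuristics (e.g., fenced-block detection)."""
    lines = text.splitlines()
    # Keep everything from the first structured-looking line onward.
    for i, line in enumerate(lines):
        if line.lstrip().startswith(("{", "[", "<", "---")):
            return "\n".join(lines[i:])
    return text  # no structured block found; leave unchanged
```

For example, an assistant message beginning with "Explanation: ..." followed by a JSON object would be reduced to the JSON object alone before loss masking.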

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "blue-hawk-2002/qwen3-4b-structured-output-lora_4"

# Load the base model, then attach the LoRA adapter on top.
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)

# Example: request structured output (the prompt below is illustrative).
messages = [
    {"role": "user", "content": 'Return {"name": ..., "age": ...} as JSON for: Alice, 30'},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

Sources & Terms (IMPORTANT)

Training data: u-10bei/structured_data_with_cot_dataset_512_v2

Dataset License: MIT. The dataset is used and distributed under the terms of the MIT License; users must comply with that license (including retention of the copyright notice) and with the base model's original terms of use.
