Tags: Text Generation · PEFT · Safetensors · English · qlora · lora · structured-output

qwen3-4b-structured-output-lora_20260205v1

This repository provides a LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit quantization, trained with Unsloth).

This repository contains LoRA adapter weights only. The base model must be loaded separately.

Training Objective

This adapter is trained to improve structured output accuracy (JSON / YAML / XML / TOML / CSV).

Loss is applied only to the final assistant output, while intermediate reasoning (Chain-of-Thought) is masked.
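
As a rough illustration, this kind of completion-only loss can be implemented by setting the label ids of every prompt and reasoning token to -100, the index PyTorch's cross-entropy ignores. The snippet below is a minimal sketch under that assumption, not the actual training code; the helper name and example token ids are hypothetical.

# Minimal sketch of completion-only loss masking (hypothetical, not the
# actual training code). Assumption: the position where the final assistant
# answer starts is already known for each tokenized example.
IGNORE_INDEX = -100  # label value skipped by PyTorch cross-entropy

def mask_labels(input_ids, answer_start):
    """Copy input_ids to labels, masking everything before answer_start."""
    labels = list(input_ids)
    for i in range(answer_start):
        labels[i] = IGNORE_INDEX  # prompt + chain-of-thought carry no loss
    return labels

# Example: a 10-token sequence whose final answer begins at position 6.
labels = mask_labels([101, 5, 17, 9, 42, 8, 311, 12, 99, 102], answer_start=6)
# -> [-100, -100, -100, -100, -100, -100, 311, 12, 99, 102]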

Training Configuration

  • Base model: Qwen/Qwen3-4B-Instruct-2507
  • Method: QLoRA (4-bit)
  • Max sequence length: 512
  • Epochs: 1
  • Learning rate: 1e-06
  • LoRA: r=64, alpha=128
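
For reference, a PEFT configuration matching these values might look like the sketch below. The target modules and dropout are assumptions (a common choice for Qwen-style architectures); the card does not state them.

from peft import LoraConfig

# Hypothetical reconstruction of the adapter config from the values above.
# target_modules and lora_dropout are assumptions, not stated in the card.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
)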

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "your_id/your-repo"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
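
Once the adapter is attached, structured output can be requested through the chat template. The prompt and decoding settings below are illustrative examples, not values taken from the training run.

# Illustrative inference example; the prompt and max_new_tokens are arbitrary.
messages = [
    {"role": "user",
     "content": "Return the fields name and age as JSON for: Alice, 30."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))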

Sources & Terms (IMPORTANT)

Training data: daichira/structured-5k-mix-sft, daichira/structured-3k-mix-sft

Dataset license: Creative Commons Attribution 4.0 International (CC BY 4.0). Users must comply with the CC BY 4.0 license terms and with the base model's original terms of use.
