Text Generation
PEFT
Safetensors
English
qlora
lora
structured-output

qwen3-4b-structured-conversion-lora-a100

This repository provides a LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit, Unsloth).

This repository contains LoRA adapter weights only. The base model must be loaded separately.

Training Objective

This adapter is trained to improve structured-output accuracy across JSON, YAML, XML, TOML, and CSV.

Loss is applied only to the final assistant output, while intermediate reasoning (Chain-of-Thought) is masked.
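The masking above can be sketched as follows. This is an illustrative snippet, not the actual training code: assistant-only loss with CoT masking amounts to setting the label to -100 (the index ignored by cross-entropy in PyTorch/Transformers) for every token outside the final assistant answer. The token values and span boundaries below are hypothetical.

```python
IGNORE_INDEX = -100  # label ignored by cross-entropy loss in PyTorch/Transformers

def mask_labels(token_ids, answer_start, answer_end):
    """Return labels where only [answer_start, answer_end) contributes to loss."""
    return [
        tok if answer_start <= i < answer_end else IGNORE_INDEX
        for i, tok in enumerate(token_ids)
    ]

# Hypothetical sequence: 1 prompt token + 3 CoT tokens + 3 answer tokens.
# The prompt and intermediate reasoning are masked; only the answer is kept.
ids = [101, 7, 8, 9, 42, 43, 44]
labels = mask_labels(ids, answer_start=4, answer_end=7)
# labels -> [-100, -100, -100, -100, 42, 43, 44]
```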

A system instruction is injected during training to enforce:

  • No reasoning tags
  • No tool calls
  • Structured output only

Training Configuration

  • Base model: Qwen/Qwen3-4B-Instruct-2507
  • Method: QLoRA (4-bit, Unsloth)
  • Max sequence length: 1024
  • Epochs: 2
  • Effective batch size: 32
  • Learning rate: 2e-05
  • LoRA: r=128, alpha=256, dropout=0.05
  • Precision: bfloat16 (A100)
  • Assistant-only loss with CoT masking
  • Format-conversion upsampling enabled
  • System-level instruction injected during training
  • Tool-call suppression via conditioning
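The LoRA hyperparameters listed above map onto a PEFT configuration roughly like the sketch below. The target modules are an assumption based on common Qwen fine-tuning setups; they are not stated on this card.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,              # LoRA rank (from the configuration above)
    lora_alpha=256,     # scaling alpha
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    # Assumed target modules -- typical for Qwen-style models, not confirmed here.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```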

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "your_id/your-repo"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
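A minimal generation example, continuing from the snippet above. The system and user prompts are illustrative; adjust them to your task.

```python
# Continuing from the loading snippet above (tokenizer and model already created).
messages = [
    {"role": "system", "content": "Return structured output only."},  # illustrative
    {"role": "user", "content": "Convert to JSON: name=Alice, age=30"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```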

Sources & Terms (IMPORTANT)

Training Data

This adapter was trained on a merged structured conversion dataset:

  • u-10bei/structured_data_with_cot_dataset_512_v5
  • daichira/structured-5k-mix-sft
  • daichira/structured-hard-sft-4k

These datasets contain format-conversion tasks covering CSV, JSON, YAML, XML, and TOML transformations.

Dataset licenses: Refer to the individual dataset pages on Hugging Face for detailed license information. The merged dataset is used and distributed under the terms of the MIT License.

Compliance: Users must comply with the MIT License (including retention of the copyright notice) and the base model's original terms of use.
