---
base_model: Qwen/Qwen3-4B-Instruct-2507
datasets:
  - u-10bei/structured_data_with_cot_dataset_512_v2
language:
  - en
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
  - qlora
  - lora
  - structured-output
  - json
  - no-cot
---

# Qwen3-4B Structured Output LoRA (No-CoT / Strict-JSON)

This repository provides a LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit quantization, trained with Unsloth).

This repository contains LoRA adapter weights only. The base model must be loaded separately.

## Training Objective

This adapter is trained to improve structured output accuracy (JSON / YAML / XML / TOML / CSV) and strict format compliance.

**Key Training Decisions:**

1. **System Prompts Removed:** The model is trained without system prompts, matching an inference environment where they are unavailable.
2. **CoT Removed (Direct Output):** Chain-of-Thought reasoning steps and "Output:" markers were stripped from the training data.
3. **Assistant-Only Loss:** The model is trained to emit the structured data immediately after the user prompt, with loss computed only on the assistant's response tokens (the prompt is masked out); see the sketch after this list.
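
The masking in decision 3 can be pictured with a short sketch. The snippet below is illustrative only, not the actual training pipeline (which used Unsloth); the `build_example` helper and details such as appending the tokenizer's EOS token are assumptions made for clarity.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")

def build_example(user_text: str, target_output: str, max_len: int = 512):
    # No system prompt, matching the inference environment.
    prompt_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": user_text}],
        add_generation_prompt=True,  # ends exactly where the assistant's reply begins
        tokenize=True,
    )
    # Target is the raw structured output: no CoT, no "Output:" marker.
    output_ids = tokenizer(target_output, add_special_tokens=False)["input_ids"]
    output_ids = output_ids + [tokenizer.eos_token_id]

    input_ids = (prompt_ids + output_ids)[:max_len]
    # Prompt tokens get label -100, so cross-entropy loss is computed
    # only on the assistant's structured output.
    labels = ([-100] * len(prompt_ids) + output_ids)[:max_len]
    return {"input_ids": input_ids, "labels": labels}
```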

## Training Configuration

- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: QLoRA (4-bit)
- Max sequence length: 512
- Epochs: 1
- Learning rate: 1e-06
- LoRA: r=64, alpha=128
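
For reference, these hyperparameters could be expressed with `peft` and `bitsandbytes` roughly as below. This is a sketch, not the exact training setup: the target modules, dropout, and any Unsloth-specific options are assumptions.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization of the frozen base model (QLoRA-style).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# LoRA adapter with the listed rank and alpha; target modules are assumed.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```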

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model_name = "Qwen/Qwen3-4B-Instruct-2507"
adapter_name = "your_id/your-repo"  # Replace with your HF hub path

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_name)

# Inference Example
messages = [
    {"role": "user", "content": "Convert this text to JSON: ..."}
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# The model will output JSON immediately
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
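
The decode above includes the prompt text. A common follow-up, sketched below with the same variable names, is to decode only the newly generated tokens and parse them directly; `json.loads` assumes the request asked for JSON.

```python
import json

# Keep only the tokens generated after the prompt.
generated = outputs[0][inputs["input_ids"].shape[-1]:]
json_text = tokenizer.decode(generated, skip_special_tokens=True)

# The adapter emits the structure directly, so the text should parse as-is.
data = json.loads(json_text)
print(data)
```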

## Sources & Terms (IMPORTANT)

**Training data:** u-10bei/structured_data_with_cot_dataset_512_v2

**Dataset license:** MIT. The dataset is used and redistributed under the terms of the MIT License; users must comply with that license (including preservation of the copyright notice) and with the base model's original terms of use.