Wave1-HigherLR-Best-JSON-Clean

This repository provides a LoRA adapter for Qwen/Qwen3-4B-Instruct-2507, trained with supervised fine-tuning (SFT) using LoRA (r=64, alpha=128, learning rate 2e-6).

This repository contains LoRA adapter weights only. The base model must be loaded separately.

Training Objective

This adapter is trained to improve the accuracy of structured output generation across common formats (JSON, YAML, XML, TOML, CSV); see the usage example below for a sample structured-output prompt.

Training Configuration

  • Method: SFT with LoRA (r=64, alpha=128, learning rate 2e-6); see the configuration sketch after this list
  • Base model: Qwen/Qwen3-4B-Instruct-2507
  • Dataset: u-10bei/structured_data_with_cot_dataset_512_v2
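
For reference, the following is a minimal sketch of how a PEFT configuration with these hyperparameters could be reconstructed. The target modules and dropout value are assumptions for illustration; the exact settings used in training are not documented in this card.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
lora_config = LoraConfig(
    r=64,                # LoRA rank, as listed above
    lora_alpha=128,      # LoRA alpha, as listed above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    lora_dropout=0.05,   # assumed; not documented in this card
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()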

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "a1273352/llm-compe-wave1-higher-lr-clean"

# Load the tokenizer and the base model; this repository ships adapter weights only.
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Apply the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(model, adapter)
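
Once loaded, the model can be prompted like the base instruct model. The following is a minimal sketch of a structured-output request, continuing from the code above; the prompt itself is illustrative and not taken from the training data.

# Example: request JSON output and decode only the newly generated tokens.
messages = [
    {"role": "user", "content": 'Return a JSON object with keys "name" and "age" describing a person.'},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))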

Sources & Terms

Training data: u-10bei/structured_data_with_cot_dataset_512_v2

Dataset license: MIT. Users must comply with the MIT license and the base model's terms of use.
