Qwen3-4B-DPO-Silent-Format

This model is a fine-tuned version of Qwen/Qwen3-4B-Instruct-2507 using Direct Preference Optimization (DPO).

🎯 Training Objective

Unlike typical CoT (Chain-of-Thought) tuning, this model is optimized to suppress verbose reasoning and enforce strict structured output compliance.

The goal is to prevent downstream parse errors by having the model emit structured data (JSON/TOML) directly, with no preamble such as "Approach:" or "Here is the code."
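For illustration, a preference pair matching this objective might look like the sketch below. This is a hypothetical example; the actual pairs come from the dataset listed under Sources & License.

# Hypothetical DPO preference pair illustrating the objective; the real
# training pairs come from u-10bei/dpo-dataset-qwen-cot (see below).
pair = {
    "prompt": "Output a JSON object for a user named Alice.",
    "chosen": '{"name": "Alice"}',  # bare, machine-parseable output
    # Same content, but wrapped in the kind of preamble this tuning penalizes:
    "rejected": 'Approach: I will construct the object step by step.\n{"name": "Alice"}',
}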

Training Configuration

  • Base model: Qwen/Qwen3-4B-Instruct-2507
  • Method: DPO (Direct Preference Optimization)
  • Epochs: 1
  • Learning rate: 1e-6
  • Beta: 0.05 (a low KL coefficient, allowing the policy to move further from the reference model toward the chosen style)
  • Max sequence length: 2048
  • LoRA Config: r=16, alpha=32 (adapters merged into the base weights; see the sketch below)
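For reference, a minimal sketch of how this configuration maps onto TRL's DPOTrainer. This is not the actual training script, and the dataset column names (prompt/chosen/rejected) are assumptions.

# Sketch only: reproduces the hyperparameters above with TRL's DPOTrainer.
# Assumes the dataset provides "prompt"/"chosen"/"rejected" columns.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "Qwen/Qwen3-4B-Instruct-2507"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

args = DPOConfig(
    output_dir="qwen3-4b-dpo-silent-format",
    num_train_epochs=1,
    learning_rate=1e-6,
    beta=0.05,        # low beta: weak KL pull toward the reference model
    max_length=2048,
)
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,  # LoRA adapters, merged into the base after training
)
trainer.train()
# merged = trainer.model.merge_and_unload()  # merge adapters into the base weights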

Usage

Since this is a merged model, you can use it directly with transformers.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "naru0411/LLM-competition-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the released weights are stored in BF16
    device_map="auto"
)

# Test inference: the model should respond directly, without an "Approach:" preamble
prompt = "Output a JSON for a user named Alice."
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_dict=True,   # return a dict so the encoding can be unpacked into generate()
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the echoed prompt
completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(completion)
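
Since the point of the tuning is parseability, a quick sanity check (continuing from the snippet above) is to feed the completion straight into json.loads; any surviving preamble will make it fail:

import json

try:
    data = json.loads(completion)  # bare JSON should parse cleanly
    print("Parsed OK:", data)
except json.JSONDecodeError as err:
    print("Parse failed (leftover preamble or trailing text?):", err)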

Sources & License (IMPORTANT)

  • Training Data: u-10bei/dpo-dataset-qwen-cot
  • License: MIT, per the dataset's terms.
  • Compliance: users must also follow the license terms of the base model (Qwen/Qwen3-4B-Instruct-2507).