ta_s3c10_dpoc03

This is a fully merged model based on Qwen/Qwen3-4B-Instruct-2507, optimized for structured output generation (JSON / YAML / XML / TOML / CSV).

Merge Strategy

Task Arithmetic merge of SFT Stage 3 and DPO deltas on top of the base model:

merged = base + 1.0 * (sft_s3 - base) + 0.3 * (dpo - base)
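The formula above can be sketched as a per-tensor merge over raw state dicts. This is a minimal illustration, not the actual merge pipeline used for this model (a real merge would typically use a dedicated tool such as mergekit and handle dtype/device placement):

```python
import torch

def task_arithmetic_merge(base_sd, sft_sd, dpo_sd, sft_coef=1.0, dpo_coef=0.3):
    """Compute base + sft_coef*(sft - base) + dpo_coef*(dpo - base) per tensor."""
    merged = {}
    for name, base_w in base_sd.items():
        # Each fine-tune contributes its delta from the base, scaled by its coefficient
        merged[name] = (
            base_w
            + sft_coef * (sft_sd[name] - base_w)
            + dpo_coef * (dpo_sd[name] - base_w)
        )
    return merged
```

With the coefficients above, a base weight of 1.0, an SFT weight of 2.0, and a DPO weight of 3.0 merge to 1.0 + 1.0·(1.0) + 0.3·(2.0) = 2.6.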

Source Adapters

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "DLNorb/ta_s3c10_dpoc03"

# Load the tokenizer and model (bfloat16, automatically placed across available devices)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Build a chat prompt and generate deterministically (greedy decoding)
messages = [{"role": "user", "content": "Convert this to JSON: name=Alice, age=30"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=2048, do_sample=False)

# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
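Since the model targets structured formats, the generated text can be validated before downstream use. A minimal sketch for JSON output (the fence-stripping heuristic is an assumption about possible model behavior, not a documented guarantee):

```python
import json

def parse_json_output(text):
    """Parse model output as JSON, tolerating an optional markdown code fence."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        # Drop the opening fence line (e.g. ```json) and the closing fence
        cleaned = cleaned.split("\n", 1)[1]
        cleaned = cleaned.rsplit("```", 1)[0]
    return json.loads(cleaned)  # raises json.JSONDecodeError on malformed output
```

Catching `json.JSONDecodeError` here gives a natural retry point if generation occasionally produces malformed output.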

Sources & Terms (IMPORTANT)

Training data:

Compliance: Users must comply with each dataset's license (including copyright notice) and the base model's original terms of use.
