# lin_s3w50_dpow50

A fully merged model based on Qwen/Qwen3-4B-Instruct-2507, optimized for structured output generation (JSON / YAML / XML / TOML / CSV).
## Merge Strategy

Linear interpolation (50:50) of two merged models:

`merged = 0.5 * sft_s3_merged + 0.5 * dpo_merged`
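The interpolation above is a per-parameter weighted average of the two checkpoints. A minimal sketch of the idea (function name and scalar "parameters" are illustrative, not the actual merge script):

```python
def linear_merge(state_a, state_b, w_a=0.5, w_b=0.5):
    """Linearly interpolate two checkpoints' parameters, key by key.

    state_a / state_b are parameter dicts (tensors in practice; plain
    floats here to keep the sketch self-contained).
    """
    assert state_a.keys() == state_b.keys(), "checkpoints must share parameter names"
    return {k: w_a * state_a[k] + w_b * state_b[k] for k in state_a}

# toy example: a 50:50 blend of two scalar weights
sft = {"layer.weight": 1.0}
dpo = {"layer.weight": 3.0}
merged = linear_merge(sft, dpo)  # {"layer.weight": 2.0}
```

The same expression applies element-wise to every tensor in the two state dicts, which is why both inputs must come from the same base architecture.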
## Source Adapters

- **SFT Stage 3:** DLNorb/lora_structeval_t_qwen3_4b_v2_stage3 (checkpoint-100)
  - QLoRA r=32, alpha=64, trained on u-10bei/structured_data_with_cot_dataset_512_v5
  - CoT-masked SFT: loss applied only to the final structured output
- **DPO:** DLNorb/dpo_lora_model_stage3 (checkpoint-505)
  - QLoRA r=8, alpha=16, trained on u-10bei/dpo-dataset-qwen-cot
  - Applied on top of SFT Stage 3
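Before the 50:50 interpolation, each LoRA adapter is folded into its base weights. Under the standard PEFT convention, a rank-r adapter updates a weight as W' = W + (alpha / r) * B @ A, so the r=32/alpha=64 adapter above scales its update by 2. A sketch under that assumption (shapes are illustrative):

```python
import numpy as np

def merge_lora(W, A, B, r, alpha):
    """Fold a LoRA adapter into a base weight matrix.

    W: (out, in) base weight; A: (r, in); B: (out, r).
    scaling = alpha / r, as in the common PEFT convention.
    """
    return W + (alpha / r) * (B @ A)

# tiny rank-1 example: alpha/r = 2, so each entry of the update is 2.0
W = np.zeros((2, 2))
A = np.ones((1, 2))
B = np.ones((2, 1))
W_merged = merge_lora(W, A, B, r=1, alpha=2)
```

In practice this is what `peft`'s `merge_and_unload()` does for every adapted layer; the snippet only shows the arithmetic behind it.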
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "DLNorb/lin_s3w50_dpow50"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Convert this to JSON: name=Alice, age=30"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=2048, do_sample=False)

print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
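Even with a model tuned for structured output, downstream code should validate what comes back. A minimal sketch of parsing a JSON completion (the helper and the sample string are hypothetical, not part of this repo):

```python
import json

def parse_json_output(text):
    """Strip an optional markdown fence and parse the model's JSON output."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        # drop the opening fence line (possibly "```json") and the closing fence
        cleaned = cleaned.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(cleaned)

# hypothetical completion for the prompt above
sample = '```json\n{"name": "Alice", "age": 30}\n```'
print(parse_json_output(sample))  # {'name': 'Alice', 'age': 30}
```

If `json.loads` raises, the output can be rejected or regenerated rather than passed on silently.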
## Sources & Terms (IMPORTANT)

Training data:

- https://huggingface.co/datasets/u-10bei/structured_data_with_cot_dataset_512_v5: MIT License
- https://huggingface.co/datasets/u-10bei/dpo-dataset-qwen-cot: MIT License

Compliance: Users must comply with each dataset's license (including its copyright notice) and the base model's original terms of use.
## Model tree

- Base model: Qwen/Qwen3-4B-Instruct-2507