qwen3-4b-structeval-sft-merged

This model is a merged version of:

  • Base: Qwen/Qwen3-4B-Instruct-2507
  • Adapter: KKanno/qwen3-4b-structeval-sft-lora-2epoch

It was fine-tuned for structured output generation (JSON/YAML/XML/CSV).

Notes

  • This repository contains merged full weights (16-bit).
  • No LoRA adapter loading is required at inference time.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "KKanno/qwen3-4b-structeval-sft-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Instruct models expect chat-formatted input; use the tokenizer's chat template.
prompt = "Generate JSON output."
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping special tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
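
Since the model targets structured output, it can help to validate what it returns before using it downstream. Below is a minimal sketch of a post-processing helper (the `extract_json` function and the sample string are illustrative, not part of this repository) that pulls the first JSON object out of a completion and parses it:

```python
import json
import re


def extract_json(text: str):
    """Extract the first JSON object from model output and parse it.

    Returns the parsed object, or None if no valid JSON is found.
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None


# Example on a mock completion (not actual model output):
raw = 'Here is the result:\n{"name": "Alice", "age": 30}'
print(extract_json(raw))  # {'name': 'Alice', 'age': 30}
```

A similar guard (e.g. `yaml.safe_load` for YAML) can be applied to the other formats the model was trained on.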