# Qwen3-4B Structured Output Adapter (Custom Mix)
This is a LoRA adapter fine-tuned for structured output tasks (JSON/XML) on a custom dataset mix. It was trained with Unsloth and QLoRA on a single Tesla T4 GPU.
## Training Details
- Base Model: Qwen/Qwen3-4B-Instruct-2507
- Training Data: A filtered and sampled combination (1,500 entries) of:
  - u-10bei/structured_data_with_cot_dataset_512_v2
  - daichira/structured-hard-sft-4k
- Sequence Length: 1024
- Epochs: 2
- LoRA Rank: 64
- LoRA Alpha: 128
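The hyperparameters above map onto a peft `LoraConfig` roughly as sketched below. Note that `target_modules` is an assumption (the attention and MLP projections Unsloth typically targets for Qwen models), not something confirmed by the training script:

```python
from peft import LoraConfig

# Sketch of the adapter configuration implied by the training details above.
# target_modules is an assumption (typical Unsloth defaults for Qwen),
# not taken from the actual training script.
lora_config = LoraConfig(
    r=64,            # LoRA Rank: 64
    lora_alpha=128,  # LoRA Alpha: 128
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```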
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model = "Qwen/Qwen3-4B-Instruct-2507"
# Replace with your username/repo
adapter_name = "NTA2/qwen3-4b-structured-mix"

# Load the base model in 4-bit (requires bitsandbytes)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
# Attach the LoRA adapter
model = PeftModel.from_pretrained(model, adapter_name)
tokenizer = AutoTokenizer.from_pretrained(base_model)
```
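Since the adapter targets structured output, responses that are meant to be JSON should be validated before downstream use. A minimal sketch with Python's standard `json` module, using a hypothetical model response in place of real generated text:

```python
import json

# Hypothetical model response; in practice this would come from
# tokenizer.decode(model.generate(...)) with the model loaded above.
response = '{"name": "example", "tags": ["structured", "json"], "count": 2}'

try:
    parsed = json.loads(response)
except json.JSONDecodeError as err:
    raise ValueError(f"Model did not return valid JSON: {err}")

print(parsed["count"])  # → 2
```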