# LLM2026_DPO_SFT19_v14 (Silent Expert Version)
This model is a LoRA adapter built on the high-performing SFT model makotonlo/LLM2026_SFT_finalv19_7B (v19). It was fine-tuned with Direct Preference Optimization (DPO), using hyperparameters inherited from the successful V7 configuration, to eliminate conversational chatter and enforce strict raw-data output.
## 🎯 Optimization Goal (Strict No-Preamble)
The primary objective of this version is to ensure the model outputs ONLY raw data (JSON, XML, YAML, CSV) without any preambles (e.g., "Certainly!"), markdown backticks (```), or explanations.
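A downstream pipeline can verify this output contract before parsing. The helper below is an illustrative sketch (the function name and logic are not part of the model); it covers the JSON/XML delimiters called out in the usage note, not YAML/CSV payloads that begin with a field name:

```python
def violates_contract(response: str) -> bool:
    """Return True if a model response breaks the no-preamble contract:
    it must start directly with raw data ({, [, or <), with no
    conversational preamble and no markdown code fences."""
    stripped = response.strip()
    if stripped.startswith("```"):  # markdown fences are forbidden
        return True
    # Raw JSON/XML starts with one of these delimiters.
    return stripped[:1] not in {"{", "[", "<"}
```

A response like `Certainly! Here is the JSON...` fails this check, while `{"status": "ok"}` passes.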
## Training Configuration (V7 Success Specs)
- Base Intelligence: makotonlo/LLM2026_SFT_finalv19_7B (v19)
- Method: DPO (Direct Preference Optimization)
- Learning Rate: 1e-05 (Stronger correction for 7B architecture)
- Beta: 0.5 (Maximum penalty for conversational fillers and formatting noise)
- Epochs: 3.0 (Thorough enforcement of the 'Silent' persona)
- LoRA Config: r=128 (inherited) + DPO tuning
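The hyperparameters above correspond roughly to the following TRL setup. This is a hedged sketch, not the actual training script: `preference_dataset`, the output directory, and `lora_alpha` are placeholder assumptions, and the dataset must contain `(prompt, chosen, rejected)` pairs where the chosen completions are raw data and the rejected ones contain chatter.

```python
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "makotonlo/LLM2026_SFT_finalv19_7B"  # v19 SFT base
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# r=128 is stated on the card; lora_alpha is an assumption.
peft_config = LoraConfig(r=128, lora_alpha=128, task_type="CAUSAL_LM")

args = DPOConfig(
    output_dir="dpo-silent-expert",  # placeholder
    learning_rate=1e-5,              # stronger correction for the 7B architecture
    beta=0.5,                        # heavy penalty on chatty / fenced completions
    num_train_epochs=3.0,            # thorough enforcement of the 'Silent' persona
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=preference_dataset,  # placeholder: (prompt, chosen, rejected) pairs
    processing_class=tokenizer,        # named `tokenizer=` in older TRL releases
    peft_config=peft_config,
)
trainer.train()
```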
## ⚠️ Important Usage Note
When using this model, please use the ChatML prompt template. The model is optimized to start its response directly with characters like {, [, or <.
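In practice, `tokenizer.apply_chat_template(..., add_generation_prompt=True)` produces this format automatically; for reference, a ChatML prompt can also be assembled by hand (the system instruction below is illustrative):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt. The trailing assistant header leaves the
    model positioned to emit raw data ({, [, or <) as its first token."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "Return only raw JSON.",        # illustrative system instruction
    "List the primary colors.",
)
```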
## Framework versions
- PEFT 0.13.2
- Unsloth 2025.12.7
## Model tree for makotonlo/LLM2026_DPO_SFT19_v14
- Base model: Qwen/Qwen2.5-7B
- Finetuned: Qwen/Qwen2.5-7B-Instruct
- Quantized: unsloth/Qwen2.5-7B-Instruct-bnb-4bit