LLM2026_DPO_SFT19_v17 (Silent Expert v17)
This model is the latest version of the Silent Expert, evolved from the high-performing SFT model makotonlo/LLM2026_SFT_finalv19_7B (v19) (score: 0.767). It has been strictly fine-tuned with DPO (beta = 0.7) to eliminate conversational noise entirely.
🎯 Optimization Goal (Absolute Silence)
The model is trained to output ONLY raw data (JSON, XML, YAML, CSV), without:
- Preambles (e.g., "Certainly!", "To convert this...")
- Markdown code fences (e.g., ```json ... ```)
- Post-output explanations (e.g., "Note: ...")
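A consumer of the model can check these three conditions mechanically. The sketch below (not part of the model card; the helper name is hypothetical) validates that an output is bare JSON with no preamble, fences, or trailing explanation:

```python
import json

def is_raw_json(output: str) -> bool:
    """Hypothetical helper: True only if `output` is bare JSON with no
    preamble, Markdown fences, or post-output explanation."""
    text = output.strip()
    # Raw JSON must start directly with a data character.
    if not text.startswith(("{", "[")):
        return False
    try:
        # json.loads also rejects trailing text such as "Note: ..."
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

print(is_raw_json('{"a": 1}'))                 # True
print(is_raw_json('Certainly! {"a": 1}'))      # False (preamble)
print(is_raw_json('```json\n{"a": 1}\n```'))   # False (fences)
print(is_raw_json('{"a": 1} Note: done.'))     # False (explanation)
```

The same pattern extends to XML/YAML/CSV with the corresponding parsers.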
📊 Configuration (v17 Final Specs)
- Base Intelligence: v19 (0.767 accuracy)
- Method: DPO (Direct Preference Optimization)
- Learning Rate: 1e-05
- Beta: 0.7 (Maximum penalty for any non-data character)
- Epochs: 3.0
- LoRA Config: r=128, alpha=128
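The specs above map onto a TRL + PEFT setup roughly like the following. This is a configuration sketch under assumptions, not the author's actual training script; the dataset, model path, and output directory are placeholders:

```python
from trl import DPOConfig, DPOTrainer
from peft import LoraConfig

# DPO hyperparameters from the v17 spec table.
training_args = DPOConfig(
    output_dir="dpo_sft19_v17",   # placeholder
    beta=0.7,                     # strong penalty on dispreferred (noisy) outputs
    learning_rate=1e-5,
    num_train_epochs=3.0,
)

# LoRA adapter settings from the spec table.
peft_config = LoraConfig(
    r=128,
    lora_alpha=128,
    task_type="CAUSAL_LM",
)

# Preference pairs would contrast raw-data responses ("chosen") against
# the same data wrapped in preambles/fences ("rejected"), e.g.:
# trainer = DPOTrainer(
#     model="makotonlo/LLM2026_SFT_finalv19_7B",  # the v19 SFT base
#     args=training_args,
#     train_dataset=preference_dataset,           # placeholder
#     peft_config=peft_config,
# )
# trainer.train()
```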
⚠️ Important: Usage Note
Please use the ChatML template for inference. The model is optimized to start its response directly with a data character (e.g., `{`, `[`, or `<`).
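For illustration, a ChatML prompt can be assembled manually as below (a sketch; in practice `tokenizer.apply_chat_template` from transformers builds this for you). Note the open assistant turn at the end, so the model's first emitted token is the start of the data:

```python
def build_chatml_prompt(messages):
    """Format {"role", "content"} dicts into the ChatML layout and
    leave the assistant turn open (hypothetical helper)."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    ]
    # Open the assistant turn; the model should answer with raw data.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "user", "content": "Convert to JSON: name=Alice age=30"},
])
print(prompt)
```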
Model tree for makotonlo/LLM2026_DPO_SFT19_v17
- Base model: Qwen/Qwen2.5-7B
- Finetuned: Qwen/Qwen2.5-7B-Instruct
- Quantized: unsloth/Qwen2.5-7B-Instruct-bnb-4bit