LLM2026_DPO_SFT19_v17 (Silent Expert v17)

This model is the latest version of the Silent Expert line, evolved from the high-performing SFT model makotonlo/LLM2026_SFT_finalv19_7B (v19, score: 0.767). It has been fine-tuned with DPO (Beta = 0.7) to eliminate conversational noise entirely.

🎯 Optimization Goal (Absolute Silence)

Ensures the model outputs ONLY raw data (JSON, XML, YAML, CSV) without:

  • Preambles (e.g., "Certainly!", "To convert this...")
  • Markdown code fences (e.g., ```json ... ```)
  • Post-output explanations (e.g., "Note: ...")
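A quick way to verify "absolute silence" is to check that a response parses as raw data with nothing before or after it. The sketch below (not part of the model; `is_silent_json` is a hypothetical helper for illustration) covers the JSON case:

```python
import json

def is_silent_json(response: str) -> bool:
    """Return True if `response` is raw JSON with no preamble,
    Markdown fences, or trailing explanation."""
    text = response.strip()
    # A code fence or a conversational first character disqualifies the output.
    if text.startswith("```") or text[:1] not in "{[":
        return False
    try:
        json.loads(text)          # the entire string must parse as JSON
        return True
    except json.JSONDecodeError:  # trailing text such as "Note: ..." fails here
        return False

print(is_silent_json('{"a": 1}'))                         # True
print(is_silent_json('Certainly! Here is the JSON: {}'))  # False
print(is_silent_json('```json\n{"a": 1}\n```'))           # False
```

The same pattern extends to XML, YAML, or CSV by swapping in the appropriate parser.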

πŸ›  Configuration (v17 Final Specs)

  • Base Intelligence: v19 (0.767 accuracy)
  • Method: DPO (Direct Preference Optimization)
  • Learning Rate: 1e-05
  • Beta: 0.7 (Maximum penalty for any non-data character)
  • Epochs: 3.0
  • LoRA Config: r=128, alpha=128
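To make the role of Beta concrete, here is the per-pair DPO objective as a minimal sketch (the log-probabilities are illustrative numbers, not values from this model): the loss is -log σ(β · margin), where the margin is the policy-vs-reference log-ratio difference between the chosen and rejected responses, and a larger β amplifies the same margin.

```python
import math

def dpo_loss(beta, logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected):
    """Per-pair DPO loss: -log sigmoid(beta * margin), where the margin
    compares policy-vs-reference log-ratios of chosen vs rejected."""
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Same preference margin, two betas (illustrative numbers only):
loss_hi_beta = dpo_loss(0.7, -10.0, -12.0, -11.0, -11.0)
loss_lo_beta = dpo_loss(0.1, -10.0, -12.0, -11.0, -11.0)
print(loss_hi_beta < loss_lo_beta)  # True: a larger beta rewards the margin more
```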

⚠️ Important: Usage Note

Please use the ChatML template for inference. The model is optimized to begin its response directly with the first character of the data (e.g., `{` for JSON, `<` for XML).
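For reference, a ChatML prompt can be assembled by hand as in the sketch below (a minimal illustration; the system/user strings are example values, and in practice the tokenizer's built-in chat template should be preferred). Generation continues right after the final assistant header, so the first generated token is the first character of the data:

```python
def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML prompt ending at the assistant header,
    so the model's completion starts with the data itself."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "Convert the input to JSON. Output raw JSON only.",
    "name: Alice, age: 30",
)
print(prompt.endswith("<|im_start|>assistant\n"))  # True
```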


Model Tree

  • Base model: Qwen/Qwen2.5-7B
  • This model: LoRA adapter on the base