LLM2026_DPO_SFT19_v12 (Silent Expert Version)

This model is a LoRA adapter built on the SFT model makotonlo/LLM2026_SFT_finalv19_7B. It was fine-tuned with Direct Preference Optimization (DPO) to eliminate conversational chatter and enforce strict raw-data output.

🎯 Optimization Goal (Strict No-Preamble)

The primary objective of this version is to make the model output ONLY raw data (JSON, XML, YAML, or CSV), with no preambles, markdown code fences (```), or explanations, in order to comply with strict competition rules.
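
To make the objective concrete, a DPO preference pair for this kind of "no preamble" training typically contrasts a raw-data completion (chosen) with a conversational one (rejected). The example below is purely illustrative and assumes the common `prompt`/`chosen`/`rejected` layout; it is not taken from the actual training data.

```python
# Illustrative preference pair for the "raw data only" objective.
# The prompt/chosen/rejected keys follow the layout commonly used for DPO datasets;
# the content is a made-up example, not from the real training set.
preference_example = {
    "prompt": "Extract the product name and price from: 'The Acme Widget costs $19.99.' Return JSON.",
    # Chosen: raw JSON, starting directly with '{', no backticks or explanation.
    "chosen": '{"product": "Acme Widget", "price": 19.99}',
    # Rejected: conversational preamble and trailing chatter.
    "rejected": 'Sure! Here is the JSON you asked for: {"product": "Acme Widget", "price": 19.99} Let me know if you need anything else!',
}
```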

🛠 Training Configuration

  • Base model: makotonlo/LLM2026_SFT_finalv19_7B (v19)
  • Method: DPO (Direct Preference Optimization)
  • Learning Rate: 5e-06
  • Beta: 0.1 (DPO regularization strength; controls how far the policy may drift from the reference SFT model)
  • Max Steps: 500
  • LoRA Config: r=64, alpha=64 (see the training sketch after this list)
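
A minimal training sketch that wires these hyperparameters into TRL's `DPOTrainer` with a PEFT LoRA config is shown below. The preference-data file name and some keyword arguments (which differ between TRL versions) are assumptions, not details from this card.

```python
# Sketch of the DPO setup using the hyperparameters listed above.
# Assumes a JSONL preference dataset with "prompt"/"chosen"/"rejected" columns;
# the file name is a placeholder and some argument names vary with the TRL version.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "makotonlo/LLM2026_SFT_finalv19_7B"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

peft_config = LoraConfig(r=64, lora_alpha=64, task_type="CAUSAL_LM")

args = DPOConfig(
    output_dir="LLM2026_DPO_SFT19_v12",
    learning_rate=5e-6,
    beta=0.1,
    max_steps=500,
)

train_dataset = load_dataset("json", data_files="preference_pairs.jsonl", split="train")

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL releases use tokenizer= instead
    peft_config=peft_config,     # DPOTrainer wraps the base model in a LoRA adapter
)
trainer.train()
```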

⚠️ Important: Usage Note

When using this model, please use the same strict prompt template used during training to ensure the output starts directly with {, [, or <.
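
As a sketch of that usage pattern, the snippet below loads the adapter with PEFT and verifies that the generation starts with a raw-data character. The exact training prompt template is not published on this card, so the prompt string here is only a placeholder.

```python
# Sketch: load the adapter on top of the base SFT model and generate raw data.
# The prompt below is a placeholder; substitute the exact template used during training.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "makotonlo/LLM2026_SFT_finalv19_7B"
adapter_id = "makotonlo/LLM2026_DPO_SFT19_v12"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Extract all fields from the text below and return JSON only.\n\nText: The Acme Widget costs $19.99."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
completion = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
).strip()

# The card's contract: output starts directly with raw data, not prose or backticks.
assert completion[0] in "{[<", f"Unexpected preamble: {completion[:40]!r}"
print(completion)
```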

Framework versions

  • PEFT 0.13.2