
# DPO Fine-tuning Result

- **Task Type:** DpoTask
- **Base Model:** mistralai/Mistral-7B-Instruct-v0.2
- **SHA256:** b3ec5f969ac3a871bf1b45bedc25fc9dab729f8759f1ef076e81345c8d10ea30
- **Upload Time:** 2025-07-12T23:40:00Z

This model was trained with Direct Preference Optimization (DPO) on Subnet 56 (Gradients).
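As a minimal sketch of what DPO optimizes (not the subnet's actual training code), the loss rewards the policy for increasing the log-probability of the preferred response relative to the reference model more than it does for the rejected one. The log-probability values and `beta` below are purely illustrative:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss: -log(sigmoid(beta * (chosen log-ratio - rejected log-ratio)))."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # log(1 + exp(-x)) is the numerically stable form of -log(sigmoid(x)).
    return math.log1p(math.exp(-logits))

# Policy prefers the chosen response more than the reference does,
# so the loss drops below log(2), the value at indifference.
print(dpo_loss(-10.0, -12.0, -11.0, -11.5) < math.log(2))
```

When the policy and reference assign identical log-probabilities, the loss sits exactly at `log(2)`; training pushes it toward zero by widening the chosen-vs-rejected margin.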

To use this adapter:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the DPO-trained LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "raniero/submission_final_auto_dpo_001")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```