thorirhrafn/gpt1B_DPO_model

Tags: PEFT · TensorBoard · Safetensors · trl · dpo · Generated from Trainer
gpt1B_DPO_model / reference (101 MB, 1 contributor, history: 1 commit)
Latest commit: cafa574 (verified), "End of training" by thorirhrafn, almost 2 years ago
  • adapter_config.json (591 Bytes) · End of training, almost 2 years ago
  • adapter_model.safetensors (101 MB) · End of training, almost 2 years ago
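The two files above form a standard PEFT adapter checkpoint: adapter_config.json holds the adapter hyperparameters and adapter_model.safetensors holds the adapter weights (given the repo's trl/dpo tags, this "reference" folder plausibly stores the frozen reference policy used during DPO training, though the page does not say so). As a minimal sketch, the config is plain JSON; the fields and values below are a hypothetical LoRA-style example, not the actual contents of this repository's 591-byte file:

```python
import json

# Hypothetical illustration of typical PEFT adapter_config.json fields;
# the real values in thorirhrafn/gpt1B_DPO_model are not shown on this page.
example_config = {
    "peft_type": "LORA",
    "base_model_name_or_path": "some/base-model",  # assumption, base model unknown
    "r": 8,
    "lora_alpha": 16,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "v_proj"],
}

# Round-trip through JSON, as the library does when saving/loading the adapter.
text = json.dumps(example_config, indent=2)
loaded = json.loads(text)
print(loaded["peft_type"])
```

The small JSON config plus a safetensors weight file is what lets an adapter of only ~101 MB be applied on top of a much larger base model.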