Zeteng/peft_model_dpo

Tags: PEFT · Safetensors · Chinese · dpo · lora

Instructions for using Zeteng/peft_model_dpo with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • PEFT

    How to use Zeteng/peft_model_dpo with PEFT:

    from peft import PeftModel
    from transformers import AutoModelForCausalLM
    
    # Load the frozen base model, then attach the LoRA adapter weights on top of it.
    base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
    model = PeftModel.from_pretrained(base_model, "Zeteng/peft_model_dpo")
  • Notebooks
  • Google Colab
  • Kaggle
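For context on what a LoRA adapter like this one adds: each adapted linear layer gets two small trained matrices A and B, and the effective weight becomes W' = W + (alpha/r)·B·A. The sketch below demonstrates this with tiny pure-Python matrices (all shapes and values are illustrative assumptions, not taken from this repo) and checks that applying the adapter on the fly matches merging it into the base weight, which is what PEFT's merge_and_unload() does:

```python
# LoRA: W' = W + (alpha/r) * B @ A, shown with tiny matrices (pure Python).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matvec(X, v):
    return [sum(row[k] * v[k] for k in range(len(v))) for row in X]

d, r, alpha = 2, 1, 2            # illustrative sizes; real layers are far larger
W = [[1.0, 0.0], [0.0, 1.0]]     # frozen base weight
A = [[0.1, 0.2]]                 # LoRA down-projection, shape (r, d)
B = [[0.3], [0.4]]               # LoRA up-projection, shape (d, r)
scale = alpha / r

x = [1.0, 2.0]

# Unmerged adapter: base output plus the scaled low-rank correction.
y_adapter = [w + scale * b for w, b in zip(matvec(W, x), matvec(B, matvec(A, x)))]

# Merged: fold B @ A into W once, then do a single matmul at inference time.
BA = matmul(B, A)
W_merged = [[W[i][j] + scale * BA[i][j] for j in range(d)] for i in range(d)]
y_merged = matvec(W_merged, x)

assert all(abs(a - b) < 1e-9 for a, b in zip(y_adapter, y_merged))
print("merged and unmerged outputs match:", y_merged)
```

Merging trades adapter flexibility (swapping or stacking adapters) for the inference speed of a plain dense layer.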
peft_model_dpo
551 MB
  • 2 contributors
History: 4 commits
Latest commit: Lam810, "Add YAML metadata to README" (24daaa1, 12 months ago)
  • .gitattributes
    1.52 kB
    initial commit 12 months ago
  • README.md
    579 Bytes
    Add YAML metadata to README 12 months ago
  • adapter_config.json
    733 Bytes
    Initial commit with model files 12 months ago
  • adapter_model.safetensors
    551 MB
    Initial commit with model files 12 months ago
  • generation_config.json
    188 Bytes
    Initial commit with model files 12 months ago
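The size of adapter_model.safetensors is determined by the LoRA rank and the set of adapted modules recorded in adapter_config.json: each adapted linear layer of shape (d_in, d_out) stores r·(d_in + d_out) extra parameters. A back-of-envelope sketch, where every number is an assumption about a Llama-2-7B-style base model rather than a value read from this repo's config:

```python
# Estimate LoRA adapter size: each adapted Linear(d_in -> d_out) stores
# A (r x d_in) and B (d_out x r), i.e. r * (d_in + d_out) parameters.
hidden = 4096          # Llama-2-7B hidden size (assumption)
layers = 32            # number of transformer blocks (assumption)
r = 8                  # LoRA rank (illustrative)
targets_per_layer = 2  # e.g. q_proj and v_proj only (illustrative)

params = layers * targets_per_layer * r * (hidden + hidden)
size_mb = params * 4 / 1e6  # 4 bytes/param for fp32 safetensors
print(f"{params:,} params ~= {size_mb:.0f} MB")
```

Under these assumptions the adapter would be only a few tens of megabytes; a 551 MB file suggests a higher rank, more target modules, or additional saved tensors than this minimal configuration.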