zy0yang's Collections
Alignment-DPO-line
MOE
long-context
toolkit
pretrain

Alignment-DPO-line

updated Jun 27, 2024

  • sDPO: Don't Use Your Data All at Once

    Paper • 2403.19270 • Published Mar 28, 2024 • 41

  • Advancing LLM Reasoning Generalists with Preference Trees

    Paper • 2404.02078 • Published Apr 2, 2024 • 46

  • Learn Your Reference Model for Real Good Alignment

    Paper • 2404.09656 • Published Apr 15, 2024 • 90

  • mDPO: Conditional Preference Optimization for Multimodal Large Language Models

    Paper • 2406.11839 • Published Jun 17, 2024 • 40

  • Instruction Pre-Training: Language Models are Supervised Multitask Learners

    Paper • 2406.14491 • Published Jun 20, 2024 • 96