wzhouad's Collections
WPO

updated Aug 22, 2024

Models and datasets from the paper "WPO: Enhancing RLHF with Weighted Preference Optimization".

Upvotes: 7

  • wzhouad/Llama3-Instruct-8B-WPO-FP

    Text Generation • 8B • Updated Jul 24, 2024 • 7

  • wzhouad/Llama3-Instruct-8B-WPO-HB

    Text Generation • 8B • Updated Aug 22, 2024 • 8 • 1

  • wzhouad/zephyr-7B-WPO-FP

    Text Generation • 7B • Updated Jul 24, 2024 • 12

  • wzhouad/zephyr-7B-WPO-HB

    Text Generation • 7B • Updated Aug 21, 2024 • 7

  • wzhouad/Llama3-Instruct-8B-WPO-HB-v2

    Text Generation • 8B • Updated Aug 22, 2024 • 9 • 5

  • wzhouad/gemma-2-9b-it-WPO-FP

    Text Generation • 9B • Updated Aug 8, 2024 • 7

  • wzhouad/gemma-2-9b-it-WPO-HB

    Text Generation • 9B • Updated Aug 21, 2024 • 19 • 34

  • wzhouad/zephyr-ultrafeedback-hybrid

    Viewer • Updated Aug 21, 2024 • 64.7k • 185 • 2

  • wzhouad/gemma-2-ultrafeedback-hybrid

    Viewer • Updated Aug 21, 2024 • 61.6k • 97 • 8

  • wzhouad/llama3-ultrafeedback-hybrid

    Viewer • Updated Aug 22, 2024 • 64.5k • 119 • 2

  • wzhouad/llama3-ultrafeedback-hybrid-v2

    Viewer • Updated Aug 22, 2024 • 64.5k • 66 • 5