trl-lib's Collections: Preference datasets • Stepwise supervision datasets • Prompt-completion datasets • Prompt-only datasets • Unpaired preference datasets • Comparing DPO with IPO and KTO • Online-DPO

Online-DPO (updated Mar 2)
trl-lib/pythia-1b-deduped-tldr-online-dpo • 1B • Updated Aug 2, 2024 • 2
trl-lib/pythia-1b-deduped-tldr-sft • 1B • Updated Aug 2, 2024 • 213
trl-lib/pythia-6.9b-deduped-tldr-online-dpo • 7B • Updated Aug 2, 2024 • 2
trl-lib/pythia-2.8b-deduped-tldr-sft • Updated Aug 2, 2024 • 3
trl-lib/pythia-2.8b-deduped-tldr-rm • Updated Aug 2, 2024 • 3
trl-lib/pythia-6.9b-deduped-tldr-sft • Updated Aug 2, 2024 • 4
trl-lib/pythia-6.9b-deduped-tldr-rm • Updated Aug 2, 2024 • 5
trl-lib/pythia-1b-deduped-tldr-offline-dpo • Text Generation • 1B • Updated Aug 2, 2024 • 4
trl-lib/pythia-2.8b-deduped-tldr-offline-dpo • Text Generation • 3B • Updated Aug 2, 2024 • 5
trl-lib/pythia-6.9b-deduped-tldr-offline-dpo • Text Generation • 7B • Updated Aug 2, 2024 • 7
trl-lib/pythia-2.8b-deduped-tldr-online-dpo • Text Generation • 3B • Updated Aug 2, 2024 • 6