nebchi's Collections › Fine Tuning Model › DPO Dataset

DPO Dataset
Updated Aug 8, 2024
A collection of Korean DPO datasets.
maywell/ko_Ultrafeedback_binarized
  Viewer • Updated Nov 9, 2023 • 62k rows • 218 downloads • 37 likes

kuotient/orca-math-korean-dpo-pairs
  Viewer • Updated Apr 5, 2024 • 193k rows • 105 downloads • 12 likes

zzunyang/dpo_data
  Viewer • Updated Jan 26, 2024 • 126 rows • 8 downloads • 1 like

SJ-Donald/orca-dpo-pairs-ko
  Viewer • Updated Jan 24, 2024 • 36k rows • 135 downloads • 10 likes
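Any of the datasets above can be loaded with the 🤗 `datasets` library and fed to a DPO trainer. The sketch below is a minimal, hedged example: the repo id is taken from the list above, but the column names (`prompt`, `chosen`, `rejected`) are an assumption — they are a common DPO convention, and each dataset's actual schema should be inspected before training.

```python
# Hypothetical sketch: the repo id comes from the collection above, but the
# column names are an assumption -- inspect each dataset's schema first.

def to_dpo_triples(example: dict) -> dict:
    """Map a raw record to the (prompt, chosen, rejected) triple that
    DPO trainers such as TRL's DPOTrainer expect."""
    return {
        "prompt": example["prompt"],
        "chosen": example["chosen"],
        "rejected": example["rejected"],
    }

# Usage sketch (needs network access and `pip install datasets`):
# from datasets import load_dataset
# ds = load_dataset("maywell/ko_Ultrafeedback_binarized", split="train")
# ds = ds.map(to_dpo_triples, remove_columns=ds.column_names)
```

Keeping the mapping in a small helper makes it easy to adapt when a dataset uses different column names (for example `question`/`chosen`/`rejected` in ORCA-style pair sets).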