Active filters: dpo
CyberNative/Code_Vulnerability_Security_DPO • 4.66k rows • 1.71k downloads • 158 likes
OusiaResearch/Aureth-Corpus-Hermes4.3-Generated • 654k rows • 688 downloads • 13 likes
argilla/OpenHermesPreferences • 989k rows • 632 downloads • 213 likes
llamafactory/DPO-En-Zh-20k • 20k rows • 404 downloads • 100 likes
inclusionAI/Ling-Coder-DPO • 253k rows • 218 downloads • 12 likes
yusufbaykaloglu/Turkish-STEM-DPO-Dataset • 16.2k rows • 39 downloads • 2 likes
OptiRefine/python-optimization-dpo-sample • 4 rows • 58 downloads • 2 likes
namakoo/idfu-verified-code • 100 rows • 279 downloads • 1 like
manaf1234/synthetic_leather • 1.84k rows • 529 downloads • 1 like
d0rj/synthetic-instruct-gptj-pairwise-ru • 33.1k rows • 55 downloads • 2 likes
d0rj/rlhf-reward-datasets-ru • 81.4k rows • 33 downloads • 4 likes
d0rj/oasst1_pairwise_rlhf_reward-ru • 18.9k rows • 17 downloads • 1 like
xzuyn/mmlu-auxilary-train-dpo • 101k rows • 36 downloads • 2 likes
AlexHung29629/stack-exchange-paired-128K • 128k rows • 11 downloads • 1 like
flyingfishinwater/ultrafeedback_clean • 175k rows • 22 downloads • 2 likes
efederici/alpaca-vs-alpaca-orpo-dpo • 49.2k rows • 101 downloads • 7 likes
mlabonne/chatml_dpo_pairs • 12.9k rows • 51 downloads • 55 likes
argilla/ultrafeedback-binarized-preferences-cleaned • 60.9k rows • 10k downloads • 162 likes
ThWu/dpo_highest_n_random • 182k rows • 13 downloads • 2 likes
BramVanroy/orca_dpo_pairs_dutch • 11k rows • 86 downloads • 6 likes
argilla/ultrafeedback-multi-binarized-preferences-cleaned • 158k rows • 135 downloads • 7 likes
HuggingFaceH4/orca_dpo_pairs • 12.9k rows • 9.76k downloads • 30 likes
5CD-AI/Vietnamese-Intel-orca_dpo_pairs-gg-translated • 12.9k rows • 44 downloads • 35 likes
jondurbin/gutenberg-dpo-v0.1 • 918 rows • 758 downloads • 162 likes