CatoG committed on
Update README with DPO Demo details
Added description and usage information for the DPO Demo application.
README.md
CHANGED
```diff
@@ -1,3 +1,10 @@
+A test / demo application playground for DPO Preference Tuning on different LLM models.
+Running on Huggingspace:
+https://huggingface.co/spaces/CatoG/DPO_Demo
+
+Allows for LLM model selection, preference tuning of LLM responses, model response tuning with LoRA and Direct Preference Optimization (DPO).
+Tuned model / policies can be downloaded for further use.
+
 ---
 title: DPO Demo
 emoji: π
```
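The Direct Preference Optimization objective mentioned in the README can be sketched in plain Python. This is an illustrative sketch of the standard DPO loss for a single preference pair, not code from the demo app; the function name and arguments are hypothetical.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one (chosen, rejected) preference pair.

    Each argument is the summed log-probability of the chosen or
    rejected response under the trainable policy or the frozen
    reference model; beta scales the implicit reward.
    """
    # Implicit rewards: log-prob ratios of policy vs. reference
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # -log(sigmoid(margin)): shrinks as the policy favors the chosen response
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When policy and reference agree exactly, the margin is 0
# and the loss is log(2) ≈ 0.6931
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))
```

Raising the policy's log-probability of the chosen response (or lowering it for the rejected one) widens the margin and drives the loss toward zero, which is the gradient signal DPO tuning exploits.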