Improve dataset card: Add metadata, links, and sample usage
#5 by nielsr (HF Staff), opened

README.md CHANGED
```diff
@@ -2,6 +2,14 @@
 language:
 - en
 license: apache-2.0
+task_categories:
+- conversational
+- text-generation
+tags:
+- reinforcement-learning-from-human-feedback
+- dialogue
+- conversational-ai
+- preference-alignment
 dataset_info:
 - config_name: feedback_decoder_binary
   features:
@@ -172,4 +180,35 @@ configs:
       path: turn/train-*
 ---
```

The lines added to the card body:
# Retrospective Learning from Interactions (Respect) Dataset

This dataset supports **Retrospective Learning from Interactions (Respect)**, a paradigm introduced in the paper [The Era of Real-World Human Interaction: RL from User Conversations](https://huggingface.co/papers/2509.25137).

The paper introduces Reinforcement Learning from Human Interaction (RLHI), an approach that learns directly from in-the-wild user conversations rather than from pre-annotated, expert-generated feedback, enabling continual model improvement and multifaceted alignment of conversational models. The dataset supports two complementary methods, RLHI with User-Guided Rewrites and RLHI with User-Based Rewards, which link long-term user personas to turn-level preferences.

* **Paper**: [The Era of Real-World Human Interaction: RL from User Conversations](https://huggingface.co/papers/2509.25137)
* **Project Page**: [https://lil-lab.github.io/respect](https://lil-lab.github.io/respect)
* **GitHub Repository**: [https://github.com/lil-lab/respect](https://github.com/lil-lab/respect)

## Sample Usage

You can load the data and the associated checkpoints as follows:

```python
import torch
from datasets import load_dataset
from peft import PeftModel
from transformers import Idefics2ForConditionalGeneration

# Download the data (the "turn" config, train split)
ds = load_dataset("lil-lab/respect", name="turn", split="train")

# Download checkpoints: the base Idefics2 model plus the Respect adapter
checkpoint = "HuggingFaceM4/idefics2-8b"
model_id = "lil-lab/respect"

model = Idefics2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16)
peft_model = PeftModel.from_pretrained(
    model, model_id, adapter_name="r6_bp", revision="r6_bp")
```