nielsr HF Staff committed
Commit 68f86d5 · verified · 1 Parent(s): 167e1e5

Improve dataset card: Add metadata, links, and sample usage


This pull request significantly improves the dataset card for the `lil-lab/respect` dataset.

Key changes include:
- **Metadata Update**: Added `task_categories` (`conversational`, `text-generation`) and `tags` (`reinforcement-learning-from-human-feedback`, `dialogue`, `conversational-ai`, `preference-alignment`) to improve discoverability and provide better context for the dataset.
- **Content Overhaul**: Replaced the previously minimal content with a comprehensive overview:
  - A descriptive title for the dataset.
  - An introductory paragraph summarizing the paper's core contributions and the dataset's purpose.
  - An updated paper link pointing to the official Hugging Face Papers page: https://huggingface.co/papers/2509.25137.
  - A link to the project page: https://lil-lab.github.io/respect.
  - A link to the GitHub repository: https://github.com/lil-lab/respect.
  - A "Sample Usage" section with Python snippets, taken from the GitHub README, showing how to load both the dataset and the associated model checkpoints.

These additions make the dataset card much more informative, discoverable, and user-friendly for the Hugging Face community.
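For reference, after this change the top of the README front-matter would look roughly as follows (abridged; the existing `dataset_info` and `configs` sections, which this PR does not modify, are omitted):

```yaml
# Abridged README.md front-matter after this PR
language:
- en
license: apache-2.0
task_categories:
- conversational
- text-generation
tags:
- reinforcement-learning-from-human-feedback
- dialogue
- conversational-ai
- preference-alignment
```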

Files changed (1):
  1. README.md (+40 −1)
README.md CHANGED
@@ -2,6 +2,14 @@
 language:
 - en
 license: apache-2.0
+task_categories:
+- conversational
+- text-generation
+tags:
+- reinforcement-learning-from-human-feedback
+- dialogue
+- conversational-ai
+- preference-alignment
 dataset_info:
 - config_name: feedback_decoder_binary
   features:
@@ -172,4 +180,35 @@ configs:
   path: turn/train-*
 ---
 
-<https://arxiv.org/abs/2410.13852>
+# Retrospective Learning from Interactions (Respect) Dataset
+
+This dataset supports **Retrospective Learning from Interactions (Respect)**, a paradigm introduced in the paper [The Era of Real-World Human Interaction: RL from User Conversations](https://huggingface.co/papers/2509.25137).
+
+The paper introduces Reinforcement Learning from Human Interaction (RLHI), a novel approach that learns directly from in-the-wild user conversations. This enables continual model improvement and multifaceted alignment of conversational models, moving beyond traditional pre-annotated, expert-generated human feedback. The dataset facilitates two complementary methods: RLHI with User-Guided Rewrites and RLHI with User-Based Rewards, linking long-term user personas to turn-level preferences.
+
+* **Paper**: [The Era of Real-World Human Interaction: RL from User Conversations](https://huggingface.co/papers/2509.25137)
+* **Project Page**: [https://lil-lab.github.io/respect](https://lil-lab.github.io/respect)
+* **GitHub Repository**: [https://github.com/lil-lab/respect](https://github.com/lil-lab/respect)
+
+## Sample Usage
+
+You can load the data and associated checkpoints as follows:
+
+```python
+from datasets import load_dataset
+from transformers import Idefics2ForConditionalGeneration
+from peft import PeftModel
+import torch  # needed for torch.bfloat16
+
+# Download data
+ds = load_dataset("lil-lab/respect", name="turn", split="train")
+
+# Download checkpoints
+checkpoint = "HuggingFaceM4/idefics2-8b"
+model_id = "lil-lab/respect"
+
+model = Idefics2ForConditionalGeneration.from_pretrained(
+    checkpoint, torch_dtype=torch.bfloat16)
+peft_model = PeftModel.from_pretrained(
+    model, model_id, adapter_name="r6_bp", revision="r6_bp")
+```