---
language:
  - en
license: apache-2.0
task_categories:
  - conversational
  - text-generation
tags:
  - reinforcement-learning-from-human-feedback
  - dialogue
  - conversational-ai
  - preference-alignment
dataset_info:
  - config_name: feedback_decoder_binary
    features:
      - name: predictions
        dtype: int64
      - name: labels
        dtype: int64
      - name: game_turn_id
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 5115528
        num_examples: 7003
    download_size: 651037
    dataset_size: 5115528
  - config_name: feedback_decoder_ternary
    features:
      - name: predictions
        dtype: int64
      - name: labels
        dtype: int64
      - name: game_turn_id
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 4936671
        num_examples: 7003
    download_size: 640213
    dataset_size: 4936671
  - config_name: human_eval
    features:
      - name: correctness
        dtype: string
      - name: feedback
        dtype: string
      - name: delta_clicks
        sequence: string
      - name: game_id
        dtype: string
      - name: turn_id
        dtype: int64
      - name: round
        dtype: int64
      - name: policy_name
        dtype: string
      - name: targets
        sequence: string
      - name: selected
        sequence: string
      - name: deselected
        sequence: string
      - name: context
        sequence: string
      - name: end
        dtype: int8
      - name: currently_selected
        sequence: string
    splits:
      - name: train
        num_bytes: 3190406
        num_examples: 8111
    download_size: 347537
    dataset_size: 3190406
  - config_name: interaction
    features:
      - name: game_id
        dtype: string
      - name: turn
        dtype: int64
      - name: end
        dtype: string
      - name: s comments
        dtype: string
      - name: speaker
        dtype: string
      - name: len
        dtype: string
      - name: clicks
        sequence:
          sequence: string
      - name: context
        sequence: string
      - name: targets
        sequence: string
      - name: chat
        sequence: string
      - name: dataset_alias
        dtype: string
      - name: policy_name
        dtype: string
      - name: date
        dtype: date32
      - name: round
        dtype: int64
    splits:
      - name: train
        num_bytes: 8775426
        num_examples: 7920
    download_size: 2120499
    dataset_size: 8775426
  - config_name: turn
    features:
      - name: chats
        sequence: string
      - name: clicks
        sequence:
          sequence: string
      - name: targets
        sequence: string
      - name: game_id
        dtype: string
      - name: end
        dtype: int8
      - name: context
        sequence: string
      - name: turn_id
        dtype: int8
      - name: currently_selected
        sequence: string
      - name: deselected
        sequence: string
      - name: selected
        sequence: string
      - name: chat_feedback
        dtype: string
      - name: game_turn_id
        dtype: string
      - name: prob_action
        dtype: float64
      - name: dataset_alias
        dtype: string
      - name: policy_name
        dtype: string
      - name: date
        dtype: date32
      - name: round
        dtype: int64
    splits:
      - name: train
        num_bytes: 45674609
        num_examples: 59431
    download_size: 6438021
    dataset_size: 45674609
configs:
  - config_name: feedback_decoder_binary
    data_files:
      - split: train
        path: feedback_decoder_binary/train-*
  - config_name: feedback_decoder_ternary
    data_files:
      - split: train
        path: feedback_decoder_ternary/train-*
  - config_name: human_eval
    data_files:
      - split: train
        path: human_eval/train-*
  - config_name: interaction
    data_files:
      - split: train
        path: interaction/train-*
  - config_name: turn
    data_files:
      - split: train
        path: turn/train-*
---

# Retrospective Learning from Interactions (Respect) Dataset

This dataset supports Retrospective Learning from Interactions (Respect), a paradigm introduced in the paper *The Era of Real-World Human Interaction: RL from User Conversations*.

The paper introduces Reinforcement Learning from Human Interaction (RLHI), a novel approach that learns directly from in-the-wild user conversations. This enables continual model improvement and multifaceted alignment of conversational models, moving beyond traditional pre-annotated, expert-generated human feedback. The dataset facilitates two complementary methods: RLHI with User-Guided Rewrites and RLHI with User-Based Rewards, linking long-term user personas to turn-level preferences.
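
The metadata above declares five configurations (`feedback_decoder_binary`, `feedback_decoder_ternary`, `human_eval`, `interaction`, and `turn`), each with a single `train` split. The snippet below is a minimal sketch of how to enumerate and load them with the `datasets` library; the repository id and config names are taken from this card, and the rest is generic `datasets` usage.

```python
from datasets import get_dataset_config_names, load_dataset

# List the configurations declared in this card's metadata
configs = get_dataset_config_names("lil-lab/respect")
print(configs)
# e.g. ['feedback_decoder_binary', 'feedback_decoder_ternary', 'human_eval', 'interaction', 'turn']

# Load one configuration; every config exposes a single "train" split
interaction = load_dataset("lil-lab/respect", name="interaction", split="train")
print(interaction)
```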

## Sample Usage

You can load the data and associated checkpoints as follows:

```python
from datasets import load_dataset
from transformers import Idefics2ForConditionalGeneration
from peft import PeftModel
import torch  # required for torch.bfloat16 below

# Download the turn-level data
ds = load_dataset("lil-lab/respect", name="turn", split="train")

# Download checkpoints: the Idefics2-8B base model plus the PEFT adapter
checkpoint = "HuggingFaceM4/idefics2-8b"
model_id = "lil-lab/respect"

model = Idefics2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16)
peft_model = PeftModel.from_pretrained(
    model, model_id, adapter_name="r6_bp", revision="r6_bp")
```
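
As a minimal follow-up sketch (reusing `ds` and `checkpoint` from the block above), you can load the matching processor and inspect a single example from the `turn` configuration. The field names come from the metadata above; using `AutoProcessor` to resolve the processor for this checkpoint is an assumption, not part of the original card.

```python
from transformers import AutoProcessor

# Processor matching the Idefics2 base checkpoint loaded above
processor = AutoProcessor.from_pretrained(checkpoint)

# Inspect one turn-level example; field names follow the `turn` config metadata
example = ds[0]
print(example["game_turn_id"], example["policy_name"], example["round"])
print(example["chats"])          # utterances in this turn
print(example["chat_feedback"])  # feedback attached to the turn
```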