---
dataset_info:
  features:
    - name: data_source
      dtype: string
    - name: extra_info
      struct:
        - name: cards
          list: string
        - name: display_cards
          list: int64
        - name: index
          dtype: int64
        - name: solution
          dtype: string
        - name: target
          dtype: int64
        - name: treat_face_cards_as_10
          dtype: bool
    - name: question
      dtype: string
    - name: answer
      dtype: string
  splits:
    - name: train
      num_bytes: 7567303
      num_examples: 10000
  download_size: 887045
  dataset_size: 7567303
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - text-generation
tags:
  - reinforcement-learning
language:
  - en
---

# Debunk the Myth of SFT Generalization Dataset

This dataset accompanies the paper "Debunk the Myth of SFT Generalization". The paper challenges the prevailing view that supervised fine-tuning (SFT) primarily memorizes training data and fails to generalize, in contrast to reinforcement learning (RL). It demonstrates that SFT can generalize as well as, or better than, RL when trained on appropriately constructed data, achieved through prompt diversity and Chain-of-Thought (CoT) supervision, on decision-making benchmarks such as Sokoban and General Points.

Code reproducing the results in the paper can be found at: https://github.com/XiaofengLin7/debunking-sft-generalization.

## Dataset

This dataset is part of a collection used in the paper's experiments, providing various configurations for evaluating SFT and RL models. The table below outlines these specific datasets:

| Task | Method | Diversity | Format | Link |
|---|---|---|---|---|
| Sokoban | RL | non-diverse | N/A | 🤗 |
| Sokoban | RL | diverse | N/A | 🤗 |
| Sokoban | SFT | non-diverse | answer-only | 🤗 |
| Sokoban | SFT | diverse | answer-only | 🤗 |
| Sokoban | SFT | non-diverse | cot | 🤗 |
| Sokoban | SFT | diverse | cot | 🤗 |
| General Points | RL | non-diverse | N/A | 🤗 |
| General Points | RL | diverse | N/A | 🤗 |
| General Points | SFT | non-diverse | answer-only | 🤗 |
| General Points | SFT | diverse | answer-only | 🤗 |
| General Points | SFT | non-diverse | cot | 🤗 |
| General Points | SFT | diverse | cot | 🤗 |

Each example contains `question` and `answer` fields, along with an `extra_info` struct that provides task-specific details, such as `cards`, `solution`, and `target` for the General Points benchmark.
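The `extra_info` fields for General Points can be sanity-checked locally. Below is a minimal sketch (not part of the released code) that scores a candidate `solution` expression against `cards`, `target`, and `treat_face_cards_as_10`; the card-to-value convention (A=1, face cards worth 10 when the flag is set) and the helper names are assumptions for illustration.

```python
import ast
import operator

# Supported binary operators for safe arithmetic evaluation.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def card_value(card, treat_face_cards_as_10=True):
    """Map a card string to its numeric value (assumed convention: A=1)."""
    if card == "A":
        return 1
    if card in ("J", "Q", "K"):
        return 10 if treat_face_cards_as_10 else {"J": 11, "Q": 12, "K": 13}[card]
    return int(card)

def eval_expr(expr):
    """Safely evaluate a +, -, *, / arithmetic expression via the AST."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def check_solution(cards, solution, target, treat_face_cards_as_10=True):
    """True if `solution` uses exactly the card values and reaches `target`."""
    values = sorted(card_value(c, treat_face_cards_as_10) for c in cards)
    used = sorted(int(n.value)
                  for n in ast.walk(ast.parse(solution, mode="eval"))
                  if isinstance(n, ast.Constant))
    return used == values and abs(eval_expr(solution) - target) < 1e-9

# Cards A, 5, K, Q map to 1, 5, 10, 10; (10 + 10) + 5 - 1 reaches 24.
print(check_solution(["A", "5", "K", "Q"], "(10 + 10) + 5 - 1", 24))  # → True
```

The `answer` field stores the model-facing output, so a checker like this is only a convenience for verifying that a `solution` string is consistent with the rest of `extra_info`.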

## Sample Usage

To train a model with SFT or GRPO using the associated code and similar data configurations, run the following commands from the GitHub repository. Make sure your model and data paths are set correctly beforehand.

### Train your model with SFT

For Sokoban:

```bash
bash debunk_sft/scripts/sokoban/sokoban_train_and_eval.sh
```

For General Points:

```bash
bash debunk_sft/scripts/gp_l/gp_l_train_and_eval.sh
```

### Train your model with GRPO

For Sokoban:

```bash
bash debunk_sft/scripts/sokoban/sokoban_grpo.sh
```

For General Points:

```bash
bash debunk_sft/scripts/gp_l/gp_l_grpo.sh
```