---
task_categories:
- text-generation
language:
- en
tags:
- supervised-fine-tuning
- reinforcement-learning
- sokoban
- general-points
- chain-of-thought
- instruction-following
- reasoning
- decision-making
- llm
dataset_info:
features:
- name: data_source
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ability
dtype: string
- name: reward_model
struct:
- name: ground_truth
sequence: int64
- name: style
dtype: string
- name: extra_info
struct:
- name: index
dtype: int64
- name: split
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 3590654
num_examples: 3982
- name: test
num_bytes: 1645357
num_examples: 1602
- name: test_env
num_bytes: 115290
num_examples: 100
download_size: 877296
dataset_size: 5351301
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: test_env
path: data/test_env-*
---

# Debunk the Myth of SFT Generalization Dataset
This dataset accompanies the paper [Debunk the Myth of SFT Generalization](https://arxiv.org/abs/2510.00237).

The paper challenges the common belief that supervised fine-tuning (SFT) primarily memorizes training data and generalizes poorly compared to reinforcement learning (RL). Through systematic evaluation on decision-making benchmarks (Sokoban and General Points), the authors show that adding prompt diversity and Chain-of-Thought (CoT) supervision during SFT training yields strong generalization to unseen instruction variants and strictly harder tasks, often matching or surpassing RL baselines.
**Code:** https://github.com/XiaofengLin7/debunking-sft-generalization
## Installation

This dataset is intended for use with the associated code repository. To set up the environment and dependencies:

### Prerequisites

CUDA 12.2 and cuDNN 9.1.0 are known to work, but the official docs recommend CUDA >= 12.4 and cuDNN >= 9.8.0.
### Setup

```shell
conda create -n debunk_sft python=3.10
conda activate debunk_sft
USE_MEGATRON=0 bash setup.sh
git submodule init
git submodule update
pip install -e thirdparty/verl --no-dependencies
pip install -e thirdparty/ragen --no-dependencies
pip install -e thirdparty/alfworld --no-dependencies
pip install -e thirdparty/trl --no-dependencies
```
## Dataset Overview
This repository contains various datasets used in the research, categorized by task, method, diversity, and format. These datasets are part of a larger collection.
| Task | Method | Diversity | Format | Link |
|---|---|---|---|---|
| Sokoban | RL | non-diverse | — | 🤗 |
| Sokoban | RL | diverse | — | 🤗 |
| Sokoban | SFT | non-diverse | answer-only | 🤗 |
| Sokoban | SFT | diverse | answer-only | 🤗 |
| Sokoban | SFT | non-diverse | cot | 🤗 |
| Sokoban | SFT | diverse | cot | 🤗 |
| General Points | RL | non-diverse | — | 🤗 |
| General Points | RL | diverse | — | 🤗 |
| General Points | SFT | non-diverse | answer-only | 🤗 |
| General Points | SFT | diverse | answer-only | 🤗 |
| General Points | SFT | non-diverse | cot | 🤗 |
| General Points | SFT | diverse | cot | 🤗 |
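Each record follows the feature schema declared in the card's metadata. The following is a minimal, offline sketch of what one record looks like; all field values are illustrative placeholders (the meaning of `ground_truth` entries is an assumption), and only the nesting mirrors the declared features:

```python
# Illustrative record matching the dataset's feature schema.
# Field values are placeholders; only the structure mirrors the card.
record = {
    "data_source": "sokoban",                  # string
    "prompt": [                                # list of chat messages
        {"content": "You are solving a Sokoban puzzle ...", "role": "user"},
    ],
    "ability": "sokoban",                      # string
    "reward_model": {                          # struct
        "ground_truth": [1, 2, 3, 4],          # sequence of int64
        "style": "rule",
    },
    "extra_info": {"index": 0, "split": "train", "id": "train_0"},
}

# Sanity-check the nesting against the declared schema.
assert isinstance(record["prompt"], list) and "role" in record["prompt"][0]
assert all(isinstance(x, int) for x in record["reward_model"]["ground_truth"])
print("schema ok")
```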
## Sample Usage

The following snippets from the GitHub repository demonstrate how to train models on this dataset with SFT (supervised fine-tuning) or GRPO (Group Relative Policy Optimization).
### Train your model with SFT

Specify your model and data beforehand.

For Sokoban:

```shell
bash debunk_sft/scripts/sokoban/sokoban_train_and_eval.sh
```

For General Points:

```shell
bash debunk_sft/scripts/gp_l/gp_l_train_and_eval.sh
```
### Train your model with GRPO

Specify your model and data beforehand.

For Sokoban:

```shell
bash debunk_sft/scripts/sokoban/sokoban_grpo.sh
```

For General Points:

```shell
bash debunk_sft/scripts/gp_l/gp_l_grpo.sh
```
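The `reward_model` field pairs each example with a `ground_truth` sequence and a `style` tag. As a hypothetical illustration only (the repository's actual reward functions are not shown here), a rule-style reward could score a model's predicted sequence by exact comparison against `ground_truth`:

```python
def rule_reward(predicted, ground_truth):
    """Hypothetical rule-style reward: 1.0 on exact sequence match, else 0.0.

    This is an illustrative sketch, not the repository's implementation.
    """
    return 1.0 if list(predicted) == list(ground_truth) else 0.0

print(rule_reward([1, 2, 3], [1, 2, 3]))  # exact match -> 1.0
print(rule_reward([1, 2], [1, 2, 3]))     # mismatch    -> 0.0
```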
## Citation

If you use this dataset in your research, please cite the associated paper:

```bibtex
@misc{lin2025debunkthemythofsftgeneralization,
      title={Debunk the Myth of SFT Generalization},
      author={Xiaofeng Lin and Yuandong Tian and Huazhe Xu},
      year={2025},
      eprint={2510.00237},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2510.00237},
}
```