---
configs:
  - config_name: alfworld
    data_files:
      - split: test
        path: data/alfworld/**/*.json
  - config_name: webshop
    data_files:
      - split: test
        path: data/webshop/**/*.json
  - config_name: blocksworld
    data_files:
      - split: test
        path: data/blocksworld/**/*.json
  - config_name: scienceworld
    data_files:
      - split: test
        path: data/scienceworld/**/*.json
  - config_name: textworld
    data_files:
      - split: test
        path: data/textworld/**/*.json
default_config_name: alfworld
---

# 🌟 RewardPrediction: A Fine-grained Step-wise Reward Prediction Benchmark

🌐 Website | 💻 GitHub | 📄 arXiv

Project Teaser

RewardPrediction is a large-scale benchmark designed to evaluate fine-grained, step-wise reward prediction across five diverse text-based environments: AlfWorld, ScienceWorld, TextWorld, WebShop, and BlocksWorld. It comprises a total of 2,454 unique trajectories with dense reward annotations.

To prevent heuristic reward hacking, we structured the benchmark using a paired positive-negative strategy:

  • Positive Trajectories: Expert demonstrations augmented with random interaction steps at the boundaries.
  • Negative Trajectories: Failure trajectories generated via a random policy.

## 📥 Load RewardPrediction Benchmark

Our benchmark can be loaded from the 🤗 Hugging Face repo YijunShen/RewardPrediction. The snippet below downloads the repository and then unwraps the `data/` folder so that the original environment tree (`alfworld/`, `webshop/`, etc.) is restored.

```python
import os
import shutil
from pathlib import Path

from huggingface_hub import snapshot_download

# [Optional] Your Hugging Face token (e.g., "hf_...") to avoid rate limits
HF_TOKEN = None

# 1. Download the raw files from the repository
snapshot_download(
    repo_id="YijunShen/RewardPrediction",
    repo_type="dataset",
    local_dir="rewardprediction",
    token=HF_TOKEN,
)

# 2. Unwrap the 'data' folder to restore the original environment tree
data_dir = Path("rewardprediction/data")
if data_dir.exists():
    for item in data_dir.iterdir():
        shutil.move(str(item), "rewardprediction")
    data_dir.rmdir()

print(f"✨ Original structure restored at: {os.path.abspath('rewardprediction')}")
```

## 📄 Data Schema

Each row in the dataset represents a complete task trajectory. The data features a nested structure to efficiently store sequential interactions:

  • goal description (string): The natural language goal the agent needs to achieve for this specific trajectory.
  • trajectory (list): A nested sequence of interaction steps. Each step contains the following fields:
    • action (string): The specific action executed by the agent at this time step.
    • observation (string): The textual feedback/observation returned by the environment.
    • reward (dict): A dictionary containing fine-grained reward labels:
      • raw (float): The native, sparse environment reward (usually 1.0 for success, 0.0 otherwise).
      • shaped (float): The interpolated, step-wise ground-truth reward.
      • is_expert (boolean): Indicates whether this step is part of an expert demonstration.
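As a sketch of how a row with this schema can be consumed, the snippet below walks the nested steps and reads the fine-grained reward labels. Note that the sample row here is a hypothetical illustration of the schema, not an actual entry from the dataset:

```python
# Hypothetical row following the schema above (NOT a real dataset entry).
row = {
    "goal description": "put a clean mug on the desk",
    "trajectory": [
        {"action": "go to sink", "observation": "You arrive at the sink.",
         "reward": {"raw": 0.0, "shaped": 0.25, "is_expert": True}},
        {"action": "clean mug", "observation": "The mug is now clean.",
         "reward": {"raw": 0.0, "shaped": 0.5, "is_expert": True}},
        {"action": "put mug on desk", "observation": "Task complete.",
         "reward": {"raw": 1.0, "shaped": 1.0, "is_expert": True}},
    ],
}

# Iterate over the nested interaction steps.
for step in row["trajectory"]:
    r = step["reward"]
    print(f'{step["action"]:<20} raw={r["raw"]} '
          f'shaped={r["shaped"]} is_expert={r["is_expert"]}')

# The raw reward is sparse: only the final successful step carries 1.0,
# while the shaped reward provides a dense step-wise signal.
shaped = [s["reward"]["shaped"] for s in row["trajectory"]]
final_raw = row["trajectory"][-1]["reward"]["raw"]
print("shaped rewards:", shaped, "| success:", final_raw == 1.0)
```

This mirrors how the sparse `raw` signal and the dense `shaped` signal can be separated when evaluating a step-wise reward model.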

## ✍️ Citation

If you find this dataset helpful for your research, please cite our work:

```bibtex
@misc{shen2026StateFactory,
      title={Reward Prediction with Factorized World States},
      author={Yijun Shen and Delong Chen and Xianming Hu and Jiaming Mi and Hongbo Zhao and Kai Zhang and Pascale Fung},
      year={2026},
      eprint={2603.09400},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2603.09400},
}
```