Add reinforcement-learning task category and improve documentation

#2
by nielsr (HF Staff)
Files changed (1)
  1. README.md +97 -95
README.md CHANGED
@@ -1,96 +1,98 @@
- ---
- configs:
- - config_name: alfworld
-   data_files:
-   - split: test
-     path: "data/alfworld/**/*.json"
- - config_name: webshop
-   data_files:
-   - split: test
-     path: "data/webshop/**/*.json"
- - config_name: blocksworld
-   data_files:
-   - split: test
-     path: "data/blocksworld/**/*.json"
- - config_name: scienceworld
-   data_files:
-   - split: test
-     path: "data/scienceworld/**/*.json"
- - config_name: textworld
-   data_files:
-   - split: test
-     path: "data/textworld/**/*.json"
-
- default_config_name: alfworld
- ---
-
- # 🌟 RewardPrediction: A Fine-grained Step-wise Reward Prediction Benchmark
-
- [**🌐 Website**](https://statefactory.github.io) | [**💻 GitHub**](https://github.com/yijunshens/statefactory) | [**📄 arXiv**](https://arxiv.org/abs/2603.09400)
-
- ![Project Teaser](assets/rewardprediction_demo.png)
-
- **RewardPrediction** is a large-scale benchmark designed to evaluate fine-grained, step-wise reward prediction across five diverse text-based environments: **AlfWorld**, **ScienceWorld**, **TextWorld**, **WebShop**, and **BlocksWorld**. It comprises a total of 2,454 unique trajectories with dense reward annotations.
-
- To prevent heuristic reward hacking, we structured the benchmark using a **paired positive-negative strategy**:
-
- * **Positive Trajectories**: Expert demonstrations augmented with random interaction steps at the boundaries.
- * **Negative Trajectories**: Failure trajectories generated via a random policy.
-
- ---
-
- ## 📥 Load RewardPrediction Benchmark
-
- Our benchmark can be loaded from the 🤗 huggingface repo at [YijunShen/RewardPrediction](https://huggingface.co/datasets/YijunShen/RewardPrediction) (reconstructing the `alfworld/`, `webshop/` folders, etc.).
-
- ```python
- from huggingface_hub import snapshot_download
- import shutil, os; from pathlib import Path
-
- # [Optional] Your Hugging Face token (e.g., "hf_...") to avoid rate limits
- HF_TOKEN = None
-
- # 1. Download the raw files from the repository
- snapshot_download(
-     repo_id="YijunShen/RewardPrediction",
-     repo_type="dataset",
-     local_dir="rewardprediction",
-     token=HF_TOKEN
- )
-
- # 2. Unwrap 'data' folder to restore the original environment tree
- d = Path("rewardprediction/data")
- if d.exists():
-     [shutil.move(str(i), "rewardprediction") for i in d.iterdir()]
-     d.rmdir()
-
- print(f"✨ Original structure restored at: {os.path.abspath('rewardprediction')}")
- ```
-
- ## 📄 Data Schema
-
- Each row in the dataset represents a **complete task trajectory**. The data features a nested structure to efficiently store sequential interactions:
-
- * **goal description** (string): The natural language goal the agent needs to achieve for this specific trajectory.
- * **trajectory** (list): A nested sequence of interaction steps. Each step contains the following fields:
-   * **action** (string): The specific action executed by the agent at this time step.
-   * **observation** (string): The textual feedback/observation returned by the environment.
-   * **reward** (dict): A dictionary containing fine-grained reward labels:
-     * `raw` (float): The native, sparse environment reward (usually 1.0 for success, 0.0 otherwise).
-     * `shaped` (float): The interpolated, step-wise ground-truth reward.
-     * `is_expert` (boolean): Indicates whether this step is part of an expert demonstration.
-
- ## ✍️ Citation
-
- If you find this dataset helpful for your research, please cite our work:
- ```
- @misc{shen2026StateFactory,
-       title={Reward Prediction with Factorized World States},
-       author={Yijun Shen and Delong Chen and Xianming Hu and Jiaming Mi and Hongbo Zhao and Kai Zhang and Pascale Fung},
-       year={2026},
-       eprint={2603.09400},
-       archivePrefix={arXiv},
-       primaryClass={cs.CL},
-       url={https://arxiv.org/abs/2603.09400},
- }
- ```
+ ---
+ configs:
+ - config_name: alfworld
+   data_files:
+   - split: test
+     path: data/alfworld/**/*.json
+ - config_name: webshop
+   data_files:
+   - split: test
+     path: data/webshop/**/*.json
+ - config_name: blocksworld
+   data_files:
+   - split: test
+     path: data/blocksworld/**/*.json
+ - config_name: scienceworld
+   data_files:
+   - split: test
+     path: data/scienceworld/**/*.json
+ - config_name: textworld
+   data_files:
+   - split: test
+     path: data/textworld/**/*.json
+ default_config_name: alfworld
+ task_categories:
+ - reinforcement-learning
+ ---
+
+ # 🌟 RewardPrediction: A Fine-grained Step-wise Reward Prediction Benchmark
+
+ [**🌐 Website**](https://statefactory.github.io) | [**💻 GitHub**](https://github.com/yijunshens/statefactory) | [**📄 Paper**](https://huggingface.co/papers/2603.09400)
+
+ **RewardPrediction** is a large-scale benchmark designed to evaluate fine-grained, step-wise reward prediction across five diverse text-based environments: **AlfWorld**, **ScienceWorld**, **TextWorld**, **WebShop**, and **BlocksWorld**. It comprises a total of 2,454 unique trajectories with dense reward annotations.
+
+ This dataset was introduced in the paper [Reward Prediction with Factorized World States](https://huggingface.co/papers/2603.09400).
+
+ To prevent heuristic reward hacking, the benchmark uses a **paired positive-negative strategy**:
+
+ * **Positive Trajectories**: Expert demonstrations augmented with random interaction steps at the boundaries.
+ * **Negative Trajectories**: Failure trajectories generated via a random policy.
+
+ ---
+
+ ## 📥 Sample Usage
+
+ The following snippet downloads the raw files and restores the original environment tree structure as intended by the authors:
+
+ ```python
+ import os, shutil
+ from pathlib import Path
+
+ from huggingface_hub import snapshot_download
+
+ # [Optional] Your Hugging Face token (e.g., "hf_...") to avoid rate limits
+ HF_TOKEN = None
+
+ # 1. Download the raw files from the repository
+ snapshot_download(
+     repo_id="YijunShen/RewardPrediction",
+     repo_type="dataset",
+     local_dir="rewardprediction",
+     token=HF_TOKEN
+ )
+
+ # 2. Unwrap the 'data' folder to restore the original environment tree
+ d = Path("rewardprediction/data")
+ if d.exists():
+     for item in d.iterdir():
+         shutil.move(str(item), "rewardprediction")
+     d.rmdir()
+
+ print(f"✨ Original structure restored at: {os.path.abspath('rewardprediction')}")
+ ```
+
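As a quick sanity check after restoring the tree, you can count the trajectory files per environment. This is an editorial sketch, not part of the dataset's tooling; it only assumes the layout implied by the configs above (one folder per environment, each holding `*.json` files):

```python
# Sketch (not part of the dataset's tooling): count trajectory files per
# environment after restoring the tree. Assumes one folder per environment
# under `root`, as implied by the config paths above.
from pathlib import Path

def count_trajectories(root: str) -> dict:
    """Map each environment folder under `root` to its number of .json files."""
    counts = {}
    for env_dir in sorted(Path(root).iterdir()):
        if env_dir.is_dir():
            counts[env_dir.name] = sum(1 for _ in env_dir.rglob("*.json"))
    return counts

# e.g. count_trajectories("rewardprediction") after running the snippet above
```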
+ ## 📄 Data Schema
+
+ Each row in the dataset represents a **complete task trajectory**. The data features a nested structure to store sequential interactions:
+
+ * **goal_description** (string): The natural language goal the agent needs to achieve for this specific trajectory.
+ * **trajectory** (list): A nested sequence of interaction steps. Each step contains the following fields:
+   * **action** (string): The specific action executed by the agent at this time step.
+   * **observation** (string): The textual feedback/observation returned by the environment.
+   * **reward** (dict): A dictionary containing fine-grained reward labels:
+     * `raw` (float): The native, sparse environment reward (usually 1.0 for success, 0.0 otherwise).
+     * `shaped` (float): The interpolated, step-wise ground-truth reward.
+     * `is_expert` (boolean): Indicates whether this step is part of an expert demonstration.
+
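To make the schema concrete, here is a minimal sketch that walks one record. The sample record below is hypothetical (invented values, not taken from the dataset); only the field names follow the schema above:

```python
# Sketch: walk one trajectory record following the schema above. The sample
# record is hypothetical (invented values); only the field names are real.
sample = {
    "goal_description": "put a clean mug on the desk",
    "trajectory": [
        {"action": "go to kitchen",
         "observation": "You see a mug on the counter.",
         "reward": {"raw": 0.0, "shaped": 0.3, "is_expert": True}},
        {"action": "take mug",
         "observation": "You pick up the mug.",
         "reward": {"raw": 1.0, "shaped": 1.0, "is_expert": True}},
    ],
}

def step_rewards(record: dict) -> list:
    """Collect the dense, step-wise `shaped` labels along the trajectory."""
    return [step["reward"]["shaped"] for step in record["trajectory"]]

def is_success(record: dict) -> bool:
    """The sparse `raw` reward fires (1.0) only when the task succeeds."""
    return any(step["reward"]["raw"] >= 1.0 for step in record["trajectory"])
```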
+ ## ✍️ Citation
+
+ If you find this dataset helpful for your research, please cite:
+
+ ```bibtex
+ @misc{shen2026StateFactory,
+       title={Reward Prediction with Factorized World States},
+       author={Yijun Shen and Delong Chen and Xianming Hu and Jiaming Mi and Hongbo Zhao and Kai Zhang and Pascale Fung},
+       year={2026},
+       eprint={2603.09400},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL},
+       url={https://arxiv.org/abs/2603.09400},
+ }
+ ```