Update README.md

README.md CHANGED
configs:
- config_name: default
  data_files:
  - split: eval
    path: splits/processbench_eval.parquet
  - split: train
    path: splits/processbench_sft.parquet
dataset_info:
  features:
  - name: image
## Dataset Summary

ProcessBench is a process-aware benchmark for robotic manipulation understanding. This release build contains `57,892` QA rows: `9,051` Eval rows and `48,841` SFT rows across `12` task families. The split follows strict episode / recording / scene isolation. Upstream dataset sources include `GM-100`, `RH20T`, `REASSEMBLE`, and `AIST-Bimanual`.
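As a quick consistency check, the split sizes quoted above sum to the release total (a minimal sketch; the counts are copied from this card):

```python
# Split sizes as stated in this dataset card.
SPLIT_COUNTS = {"eval": 9_051, "sft": 48_841}

total = sum(SPLIT_COUNTS.values())
print(total)  # 57892
assert total == 57_892, "split sizes should sum to the release total"
```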
## Task Families

- `T11`: Next Primitive Prediction
- `T12`: Primitive Chain Restoration
## SFT-Eval Split Reconstruction

### Relevant Assets

- Public QA rows for `eval` and `sft`.
- Split manifests and split counts.
- Public release-relative references for each item.

### How to use the public references

Each row exposes three reconstruction-oriented fields:

- `visual_ref`
- `source_episode_ref`
- `reconstruction_key_json`

These are designed to be stable, path-free references. They are sufficient to map a released item back to the corresponding source episode or recording once you have obtained the upstream dataset under its original terms.
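The reference fields are plain strings plus a JSON payload, so they can be consumed without any local paths. A minimal decoding sketch for one row — the example values, and the exact keys inside `reconstruction_key_json`, are hypothetical illustrations, not guaranteed release contents:

```python
import json

# Toy row: field names follow the public schema; values are made up for illustration.
row = {
    "visual_ref": "gm100/ep0042/T8/item_000123",
    "source_episode_ref": "gm100/ep0042",
    "reconstruction_key_json": json.dumps(
        {
            "source": "GM-100",
            "source_task_id": "task_007",
            "source_unit_id": "ep0042",
            "camera": "front",
            "frame_indices_json": "[12, 48, 96]",
        }
    ),
}

# Decode the nested JSON payloads into usable Python values.
key = json.loads(row["reconstruction_key_json"])
frame_indices = json.loads(key["frame_indices_json"])
print(key["source"], key["source_unit_id"], frame_indices)
# GM-100 ep0042 [12, 48, 96]
```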
### Source-specific units

- `GM-100`: grouped by `(task_id, episode_id)`.
- `RH20T`: grouped by `recording_id`.
- `REASSEMBLE`: grouped by `recording_id`.
- `AIST-Bimanual`: grouped by `recording_id`.
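The grouping rules above can be collapsed into a single split-key helper (a sketch using the public schema fields; `split_unit_key` is an illustrative name, not release code):

```python
def split_unit_key(row: dict) -> tuple:
    """Return the split-isolation unit a public QA row belongs to."""
    if row["source"] == "GM-100":
        # GM-100 groups by (task_id, episode_id).
        return (row["source"], row["source_task_id"], row["source_unit_id"])
    # RH20T, REASSEMBLE, and AIST-Bimanual group by recording alone.
    return (row["source"], row["source_unit_id"])

print(split_unit_key({"source": "GM-100", "source_task_id": "t7", "source_unit_id": "ep42"}))
# ('GM-100', 't7', 'ep42')
print(split_unit_key({"source": "RH20T", "source_unit_id": "rec_001"}))
# ('RH20T', 'rec_001')
```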
### Expected workflow

1. Obtain upstream datasets under their original terms.
2. Use `source`, `source_task_id`, `source_unit_id`, `camera`, and `frame_indices_json` from the public release.
3. Reconstruct the required visual inputs with the source-specific evaluation scripts in the main codebase.
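Before running step 3, it can help to batch step 2's fields so each upstream recording is decoded only once. A minimal sketch with toy rows (field names are from the public schema; the values are hypothetical):

```python
import json
from collections import defaultdict

# Toy rows; schema field names are real, values are made up for illustration.
rows = [
    {"source": "RH20T", "source_unit_id": "rec_001", "camera": "cam_0",
     "frame_indices_json": "[5, 90]"},
    {"source": "RH20T", "source_unit_id": "rec_001", "camera": "cam_0",
     "frame_indices_json": "[5, 40, 200]"},
]

# Collect every frame needed per (source, unit, camera) so each upstream
# recording is opened and decoded only once.
needed = defaultdict(set)
for row in rows:
    unit = (row["source"], row["source_unit_id"], row["camera"])
    needed[unit].update(json.loads(row["frame_indices_json"]))

print({unit: sorted(frames) for unit, frames in needed.items()})
# {('RH20T', 'rec_001', 'cam_0'): [5, 40, 90, 200]}
```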
## Public QA Fields Schema

- `item_id`: Stable public item identifier.
- `split`: `eval` or `sft`.
- `source`: `GM-100`, `RH20T`, `REASSEMBLE`, or `AIST-Bimanual`.
- `source_slug`: Short source slug used in release-relative references.
- `source_task_id`: Source-side task identifier without local paths.
- `source_unit_type`: Split-isolation unit type, e.g. `episode` or `recording`.
- `source_unit_id`: Public unit identifier without local paths.
- `task_id`: Canonical paper task ID `T1` ... `T12`.
- `task_name`: Canonical paper task name.
- `task_type_legacy`: Legacy engineering task name when it differs from paper naming.
- `input_type`: Public input format description.
- `question`: Public question text.
- `choice_A` ... `choice_F`: Flattened answer choices.
- `answer`: Public answer label in the release schema.
- `answer_text`: Public answer text mapped from `answer`.
- `num_choices`: Number of non-null answer choices.
- `num_frames`: Number of referenced frames.
- `frame_indices_json`: JSON-encoded frame indices.
- `display_labels_json`: JSON-encoded panel labels such as `["X", "Y"]` or `["X", "Z", "Y"]`.
- `camera`: Source camera identifier when available.
- `arm_type`: Source-side arm configuration tag when available.
- `visual_ref`: Release-relative visual reference string. This is not a local filesystem path.
- `source_episode_ref`: Release-relative source unit reference.
- `reconstruction_key_json`: Minimal JSON payload needed to reconstruct the sample with upstream data and source-specific scripts.
- `task_meta_in_source`: Whether source-side task metadata/context exists upstream.
- `task_meta_public`: Always `false` in this release build. Separate structured task-meta fields are intentionally withheld.
- `split_version`: Current split version.
- `split_group_id`: Source-side split group identifier.
- `builder_version`: Public builder version tag.
- `prompt_version`: Public prompt-serialization version tag.
- `sft_target`: SFT-only supervised target string. Present only in `processdata_sft.*`.
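Since `answer_text` and `num_choices` are derived from the flattened choice columns, consumers can recompute them as a schema sanity check. A minimal sketch (`derived_fields` is an illustrative helper; the row below is a hypothetical subset of the public fields):

```python
def derived_fields(row: dict) -> tuple:
    """Recompute `answer_text` and `num_choices` from the flattened choices."""
    num_choices = sum(row.get(f"choice_{c}") is not None for c in "ABCDEF")
    answer_text = row[f"choice_{row['answer']}"]
    return answer_text, num_choices

row = {  # hypothetical eval row (subset of the public fields)
    "choice_A": "grasp the handle", "choice_B": "lift the lid",
    "choice_C": None, "choice_D": None, "choice_E": None, "choice_F": None,
    "answer": "B",
}
print(derived_fields(row))  # ('lift the lid', 2)
```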
### Notes

- `T5` is equivalent to `T_progress`.
- `T8` is equivalent to `T_temporal`.
- `T9` is equivalent to `T_binary`.
- `T8` is normalized into six explicit permutation choices so the public table remains flat and Parquet-friendly.
- `T9` is normalized to `A/B` choices even when the source data originally used `X/Y`.
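Both normalizations are small and mechanical. A sketch of what they look like (the variable names are illustrative, not release code):

```python
from itertools import permutations

# T8: the six explicit permutation choices over the frame labels X, Y, Z.
t8_choices = ["".join(p) for p in permutations("XYZ")]
print(t8_choices)  # ['XYZ', 'XZY', 'YXZ', 'YZX', 'ZXY', 'ZYX']

# T9: map the legacy X/Y labels onto the released A/B choice labels.
XY_TO_AB = {"X": "A", "Y": "B"}
print(XY_TO_AB["Y"])  # B
```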
## Prompt Templates

This section documents the public prompt serialization used by the current release build. Separate source-side task-meta prompting is intentionally disabled in this public release.

### Standard multiple-choice template

```text
{question}

A: {choice_A}
B: {choice_B}
...

Choose exactly one option label.

Output protocol:
- You may include brief reasoning before the final answer.
- The final line must be exactly: <ANSWER>A</ANSWER>
- Do not output anything after </ANSWER>.
```
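The template slots map directly onto the flattened schema columns, and the output protocol makes the final answer machine-checkable. A minimal sketch — `serialize_prompt` and `extract_answer` are illustrative helpers, not release code, and the extractor also accepts the `T8`/`T9` answer shapes documented below:

```python
import re

def serialize_prompt(row: dict) -> str:
    """Fill the standard multiple-choice template from a public QA row."""
    lines = [row["question"], ""]
    for label in "ABCDEF":
        choice = row.get(f"choice_{label}")
        if choice is not None:  # skip null (unused) choice columns
            lines.append(f"{label}: {choice}")
    lines += [
        "",
        "Choose exactly one option label.",
        "",
        "Output protocol:",
        "- You may include brief reasoning before the final answer.",
        "- The final line must be exactly: <ANSWER>A</ANSWER>",
        "- Do not output anything after </ANSWER>.",
    ]
    return "\n".join(lines)

def extract_answer(completion: str):
    """Pull the final answer from a completion; None if the protocol was violated."""
    match = re.search(r"<ANSWER>([A-F]|[XYZ]{3}|[XY])</ANSWER>\s*$", completion)
    return match.group(1) if match else None

print(extract_answer("The lid moves after the grasp.\n<ANSWER>B</ANSWER>"))  # B
print(extract_answer("<ANSWER>YXZ</ANSWER>"))  # YXZ
```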
### `T8` Temporal Ordering template

```text
You are shown 3 frames from a robot manipulation task.
The frames are labeled X, Y, Z (these labels are arbitrary identifiers, not positional, and not time-ordered).
Determine the correct chronological order of these frames (from earliest to latest).
Choose exactly one 3-letter permutation, for example: YXZ means Y happened first, then X, then Z.

Output protocol:
- You may include brief reasoning before the final answer.
- The final line must be exactly: <ANSWER>XYZ</ANSWER>
- Do not output anything after </ANSWER>.
```
### `T9` Temporal Priority template

```text
A single comparison image shows two labeled robot-manipulation panels from the same episode.
The left-right placement of the panels and the labels are arbitrary identifiers and do not indicate temporal order.
Which labeled panel happened earlier in the real manipulation sequence, X or Y?
Choose exactly one label: X or Y.

Output protocol:
- You may include brief reasoning before the final answer.
- The final line must be exactly: <ANSWER>X</ANSWER>
- Do not output anything after </ANSWER>.
```
## License and Terms