---
license: other
language:
- en
pretty_name: RoboProcessBench
task_categories:
- visual-question-answering
tags:
- robotics
- embodied-ai
- benchmark
- vision-language-models
- process-understanding
- manipulation
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: splits/processdata_sft.parquet
  - split: eval
    path: splits/processdata_eval.parquet
---

# RoboProcessBench

## Dataset Summary

RoboProcessBench is a process-aware benchmark for vision-language robotic manipulation understanding. It evaluates whether vision-language models (VLMs) can infer how a manipulation execution unfolds, covering phase, contact, motion, bimanual coordination, primitive-local progress, temporal order, outcome, and primitive-level transitions.

This release contains 57,892 QA rows: 48,841 SFT rows and 9,051 evaluation rows, spanning 12 task families and 260 manipulation tasks. The SFT/eval split enforces strict episode-, recording-, and scene-level isolation. The benchmark is derived from GM-100, RH20T, REASSEMBLE, and AIST-Bimanual.
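
The parquet splits declared in the YAML header load directly with 🤗 Datasets. A minimal sketch, assuming the dataset's Hub repo id (written here as the placeholder `ORG/RoboProcessBench`):

```python
from datasets import load_dataset

# Placeholder repo id; substitute this card's actual Hub path.
data = load_dataset("ORG/RoboProcessBench")

print(data)             # DatasetDict with "train" (48,841 SFT rows) and "eval" (9,051 rows)
print(data["eval"][0])  # one QA row; field definitions live in metadata/schema.md
```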
## Task Families

RoboProcessBench contains 12 diagnostic question families (a per-family filtering sketch follows the list):

- **T1 Phase Recognition:** identify the current coarse process phase.
- **T2 Contact Detection:** determine whether task-relevant contact has occurred.
- **T3 Motion Direction Prediction:** infer the dominant motion direction from short temporal context.
- **T4 Bimanual Coordination State:** identify the current coordination state of the two arms.
- **T5 Primitive-local Progress:** estimate progress within the current local manipulation step.
- **T6 Motion State Recognition:** distinguish actively moving from stationary states.
- **T7 Operation Outcome Prediction:** predict eventual success or failure from partial execution evidence.
- **T8 Temporal Ordering:** reconstruct the chronological order of shuffled observations.
- **T9 Temporal Priority Prediction:** decide which of two observations occurred earlier.
- **T10 Current Primitive Recognition:** identify the current low-level primitive.
- **T11 Next Primitive Prediction:** infer the next primitive from local process context.
- **T12 Primitive Chain Restoration:** restore a masked primitive in a local primitive chain.
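
Each QA row belongs to one of these families, so per-family slices can be pulled out for diagnostics. A minimal sketch, assuming a per-row family column; `task_family` and the `T8` label below are illustrative, and the real field names are defined in `metadata/schema.md`:

```python
from datasets import load_dataset

eval_rows = load_dataset("ORG/RoboProcessBench", split="eval")  # placeholder repo id

# "task_family" is an assumed column name; check metadata/schema.md for the actual field.
t8_rows = eval_rows.filter(lambda row: row["task_family"] == "T8")
print(f"T8 Temporal Ordering rows: {len(t8_rows)}")
```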
## Repository Contents

The release is organized as a compact, reviewable benchmark package:

```text
RoboProcessBench/
├── splits/                         # SFT and eval QA entries, split summaries
│   ├── processdata_sft.jsonl
│   ├── processdata_sft.parquet
│   ├── sft_manifest.jsonl
│   ├── processdata_eval.jsonl
│   ├── processdata_eval.parquet
│   └── eval-manifest.jsonl
│
├── metadata/                       # statistics, schema, licenses, prompt templates, reconstruction notes
│   ├── split_summary.json
│   ├── task_distribution.csv
│   ├── asset_licenses.csv
│   ├── schema.md
│   ├── prompt_templates.md
│   └── reconstruction.md
│
├── examples/                       # rendered representative benchmark cards
│   └── task_cards/
│       ├── T1_phase_recognition.png
│       ├── T2_contact_detection.png
│       ├── ...
│       └── T12_primitive_chain_restoration.png
│
├── ProcessData-SFT-Qwen/           # LoRA adapter weights and training configuration
│   ├── adapter_config.json
│   ├── adapter_model.safetensors
│   ├── ...
│   └── training_config.json
│
├── ProcessData-SFT-Qwen_results/   # predictions and summary of the post-trained model
│   ├── ProcessData-SFT-Qwen_predictions.json
│   └── ProcessData-SFT-Qwen_summary.json
│
├── benchmark_card.md               # benchmark-level documentation
├── croissant.json                  # Croissant core + Responsible AI metadata
└── README.md
```
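
Each split ships as both `.jsonl` and `.parquet`, so the QA entries can also be inspected without any dataset library. A minimal sketch, assuming the repository has been downloaded locally (for example with `huggingface_hub.snapshot_download`); the row fields are whatever `metadata/schema.md` defines:

```python
import json
from pathlib import Path

# Assumes a local copy of the release rooted at ./RoboProcessBench
eval_path = Path("RoboProcessBench/splits/processdata_eval.jsonl")

with eval_path.open() as f:
    rows = [json.loads(line) for line in f]

print(len(rows))       # expected: 9,051 evaluation rows
print(rows[0].keys())  # field names as documented in metadata/schema.md
```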
Full upstream videos and full frame dumps are not redistributed in this release; see `metadata/reconstruction.md` for notes on rebuilding them from the upstream sources.
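
`ProcessData-SFT-Qwen/` contains LoRA adapter weights rather than a full checkpoint, so they must be attached to a base model. A minimal sketch using PEFT, assuming a Qwen2-VL instruct base; the actual base checkpoint is recorded in `adapter_config.json` (`base_model_name_or_path`), and the model id below is only an assumption:

```python
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from peft import PeftModel

# Assumed base checkpoint; confirm against ProcessData-SFT-Qwen/adapter_config.json.
base_id = "Qwen/Qwen2-VL-7B-Instruct"

processor = AutoProcessor.from_pretrained(base_id)
base_model = Qwen2VLForConditionalGeneration.from_pretrained(base_id)

# Attach the released LoRA adapter (local path to the downloaded directory).
model = PeftModel.from_pretrained(base_model, "ProcessData-SFT-Qwen")
```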
## License and Terms

This release is published under `license: other`.

- Derived benchmark metadata in this release remains subject to the terms of the upstream datasets (GM-100, RH20T, REASSEMBLE, and AIST-Bimanual); per-asset terms are listed in `metadata/asset_licenses.csv`.