# RoboPulse

RoboPulse is a benchmark introduced in [PRM-as-a-Judge: A Dense Evaluation Paradigm for Fine-Grained Robotic Auditing](https://arxiv.org/abs/2603.21669) for testing whether a vision-language judge can detect fine-grained relative progress in physical manipulation. This Hugging Face release contains the hard `1800`-example subset.

Each example asks the judge to compare a `BEFORE` state and an `AFTER` state under the same task, using task-start and task-end reference frames as anchors for the full task scope.

## Overview

The figure below illustrates the multi-view comparison setup in RoboPulse.

![RoboPulse visual overview](./robopulse_vis.png)

## Files

- `RoboPulse.json`: benchmark annotations with release-relative image paths
- `images.zip`: zipped image assets
- `README.md`: dataset overview and field definitions
- `robopulse_vis.png`: multi-view comparison illustration
- `robopulse_stat.png`: dataset coverage statistics
- `results.png`: benchmark results table from the paper

If you want the image paths in `RoboPulse.json` to resolve locally, extract `images.zip` in the same folder so that the `images/` directory sits next to `RoboPulse.json`.

## Dataset Summary

- Number of samples: `1800`
- Image references in JSON: `14400`
- Unique image files: `13059`
- Total source image size: `345.72 MB`
- Archive size: `340.97 MB`
- Source datasets: `9`
- Hop magnitude bins: `small`, `medium`, `large`

Source datasets in this release:

- `agibotworld`: `200` samples
- `agilex_newdragon`: `200` samples
- `droid_oxe`: `200` samples
- `galaxea_r1lite`: `200` samples
- `human_egodex`: `200` samples
- `human_pika`: `200` samples
- `libero_data`: `200` samples
- `robocasa_data`: `200` samples
- `robotwin2_agilex_part1`: `200` samples

The figure below summarizes the coverage of RoboPulse across data sources and task semantics.
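As a quick sanity check after downloading, the annotations can be loaded with the standard library and tallied per source dataset. This is a minimal sketch assuming the release is extracted locally; the `tally_sources` helper name is illustrative, and the field names follow the schema documented below. The synthetic two-sample list only stands in for the parsed JSON so the snippet runs on its own.

```python
import json
from collections import Counter
from pathlib import Path


def tally_sources(samples):
    """Count samples per source dataset via the `image_dataset` field."""
    return Counter(s["image_dataset"] for s in samples)


# With the release extracted locally, load the real annotations instead:
#   samples = json.loads(Path("RoboPulse.json").read_text())
# Tiny synthetic stand-in matching the documented fields:
samples = [
    {"id": "demo_0", "image_dataset": "agibotworld",
     "image": [f"images/x_{i}.png" for i in range(8)]},
    {"id": "demo_1", "image_dataset": "libero_data",
     "image": [f"images/y_{i}.png" for i in range(8)]},
]

print(tally_sources(samples))
```

On the full release, each of the nine source datasets should tally to `200`, summing to `1800`.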
![RoboPulse statistics](./robopulse_stat.png)

## Results

The figure below shows the main pairwise progress-judgment results reported for RoboPulse.

![RoboPulse results](./results.png)

## Data Format

Each item in `RoboPulse.json` is a dictionary with the following fields:

- `id`: unique sample identifier
- `task`: task instruction for the sample
- `image_dataset`: source dataset name
- `image`: a list of `8` image paths, all relative to this release folder
- `conversations`: question-answer style supervision for the judge
- `hop_value`: signed Hop value used to construct the sample pair
- `hop_absolute_value`: absolute value of `hop_value`
- `hop_category`: categorical metadata derived from `hop_value`

### Image Ordering

`image[0]` to `image[7]` always follow the same order:

1. `image[0]`: reference start frame for the task
2. `image[1]`: reference end frame for the completed task
3. `image[2]`: front view of the `BEFORE` state
4. `image[3]`: left wrist view of the `BEFORE` state
5. `image[4]`: right wrist view of the `BEFORE` state
6. `image[5]`: front view of the `AFTER` state
7. `image[6]`: left wrist view of the `AFTER` state
8. `image[7]`: right wrist view of the `AFTER` state

In other words, the benchmark compares a `BEFORE` triplet against an `AFTER` triplet, with start and end reference frames provided as conceptual anchors.

### Conversations

`conversations` stores the judge prompt and the target answer:

- `conversations[0]`: the evaluation question given to the judge model
- `conversations[1]`: the expected answer, such as `+1` for progress and `-1` for regression

### Hop Fields

The Hop-based fields describe the relative progress signal used to build RoboPulse.
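The fixed image ordering above can be mapped to named views with a small helper. This is a sketch: the `unpack_views` function name and the dictionary keys are illustrative choices, but the slot-to-view assignment follows the documented ordering exactly.

```python
def unpack_views(image_paths):
    """Map the fixed 8-slot `image` list to named views,
    following the documented slot ordering:
    [0] ref start, [1] ref end,
    [2-4] BEFORE front/left-wrist/right-wrist,
    [5-7] AFTER front/left-wrist/right-wrist."""
    if len(image_paths) != 8:
        raise ValueError(f"expected 8 image paths, got {len(image_paths)}")
    return {
        "reference": {"start": image_paths[0], "end": image_paths[1]},
        "before": {"front": image_paths[2],
                   "left_wrist": image_paths[3],
                   "right_wrist": image_paths[4]},
        "after": {"front": image_paths[5],
                  "left_wrist": image_paths[6],
                  "right_wrist": image_paths[7]},
    }


views = unpack_views([f"images/demo/frame_{i}.png" for i in range(8)])
print(views["before"]["front"])       # images/demo/frame_2.png
print(views["after"]["right_wrist"])  # images/demo/frame_7.png
```

Grouping the slots this way makes it easy to feed the `BEFORE` triplet and the `AFTER` triplet to a judge model with the two reference frames as anchors.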
For the detailed formulation, please refer to Appendix F of the paper:

- Paper: [PRM-as-a-Judge](https://arxiv.org/abs/2603.21669)
- PDF: [https://arxiv.org/pdf/2603.21669](https://arxiv.org/pdf/2603.21669)

Field meanings:

- `hop_value`: signed relative progress change between the two compared states. Positive values indicate forward progress toward the task goal, while negative values indicate regression away from the goal.
- `hop_absolute_value`: magnitude of the progress change, ignoring direction.
- `hop_category`: a dictionary with three subfields:
  - `absolute_category`: magnitude bucket of the Hop value, one of `small`, `medium`, or `large`
  - `direction`: direction bucket, either `progress` (forward) or `regression` (backward)
  - `combined_category`: combination of the two, such as `progress_small`, `progress_medium`, `progress_large`, `regression_small`, `regression_medium`, or `regression_large`

## Directory Layout

After extracting `images.zip`, the folder should look like this:

```text
hf_RoboPulse/
├── RoboPulse.json
├── images.zip
├── README.md
└── images/
    └── /
        └── ...
```

## Usage Notes

- Upload the whole folder to your Hugging Face dataset repository.
- If you want image paths in the JSON to be directly readable from the repo, extract `images.zip` before or after uploading so that `images/` exists alongside `RoboPulse.json`.
- The release preserves the original benchmark annotations and only rewrites image paths to release-relative paths under `images/`.
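To make the Hop field definitions above concrete, the sketch below rebuilds a `hop_category` dictionary from a signed Hop value. The direction and category names follow the documented vocabulary, but the `0.2` / `0.5` magnitude thresholds are hypothetical placeholders, not the paper's actual bin edges (see Appendix F for the real formulation).

```python
def hop_category(hop_value):
    """Derive the documented `hop_category` subfields from a signed Hop value.

    NOTE: the 0.2 / 0.5 magnitude thresholds below are illustrative
    placeholders only, NOT the benchmark's real bin edges.
    `hop_value` is assumed to be nonzero (each pair shows a real change).
    """
    direction = "progress" if hop_value > 0 else "regression"
    magnitude = abs(hop_value)
    if magnitude <= 0.2:
        absolute = "small"
    elif magnitude <= 0.5:
        absolute = "medium"
    else:
        absolute = "large"
    return {
        "absolute_category": absolute,
        "direction": direction,
        "combined_category": f"{direction}_{absolute}",
    }


print(hop_category(0.4)["combined_category"])   # progress_medium
print(hop_category(-0.7)["combined_category"])  # regression_large
```

The six `combined_category` values listed above fall out of the cross-product of two directions and three magnitude buckets.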
## Related Links

- Project repository: [PRM-as-a-Judge](https://github.com/Yuheng2000/PRM-as-a-Judge)
- Paper: [PRM-as-a-Judge: A Dense Evaluation Paradigm for Fine-Grained Robotic Auditing](https://arxiv.org/abs/2603.21669)

## Citation

If this project, leaderboard, or evaluation pipeline helps your work, please cite:

```bibtex
@article{ji2026prmjudge,
  title   = {PRM-as-a-Judge: A Dense Evaluation Paradigm for Fine-Grained Robotic Auditing},
  author  = {Ji, Yuheng and Liu, Yuyang and Tan, Huajie and Huang, Xuchuan and Huang, Fanding and Xu, Yijie and Chi, Cheng and Zhao, Yuting and Lyu, Huaihai and Co, Peterson and others},
  journal = {arXiv preprint arXiv:2603.21669},
  year    = {2026}
}
```