---
license: apache-2.0
task_categories:
- visual-question-answering
- image-classification
language:
- en
tags:
- robotics
- failure-detection
- manipulation
- vision-language
- multi-view
pretty_name: Guardian Failure Detection Dataset
size_categories:
- 100<n<10K
configs:
- config_name: metadata_execution
  data_files:
  - split: train
    path: metadata_execution.jsonl
  default: true
- config_name: metadata_planning
  data_files:
  - split: train
    path: metadata_planning.jsonl
- config_name: internvl_execution_vanilla
  data_files:
  - split: train
    path: internVL_dataset_execution_vanilla.jsonl
- config_name: internvl_execution_thinking
  data_files:
  - split: train
    path: internVL_dataset_execution_thinking.jsonl
- config_name: internvl_planning_vanilla
  data_files:
  - split: train
    path: internVL_dataset_planning_vanilla.jsonl
- config_name: internvl_planning_thinking
  data_files:
  - split: train
    path: internVL_dataset_planning_thinking.jsonl
---

# Guardian Failure Detection Dataset

This dataset is part of the **Guardian** project: *Detecting Robotic Planning and Execution Errors with Vision-Language Models*. It contains annotated robotic manipulation failure data for training and evaluating Vision-Language Models (VLMs) on failure detection tasks.

Guardian introduces an automated failure generation approach that procedurally perturbs successful robot trajectories to produce diverse **planning failures** and **execution failures**, each annotated with fine-grained failure categories and step-by-step reasoning traces.

## Dataset Family

This repository is one split of the Guardian dataset family. The full collection includes:

| Dataset | Source | Execution (Train / Val / Test) | Planning (Train / Val / Test) |
|---|---|---|---|
| **RLBench-Fail** | RLBench simulator (52 tasks) | 12,358 / 1,000 / 1,000 | 5,808 / 500 / 500 |
| **BridgeDataV2-Fail** | BridgeDataV2 real-robot data | 7,830 / 1,000 / 1,000 | 4,880 / 500 / 500 |
| **UR5-Fail** | UR5 robot, 3 cameras, 34 tasks | 400 / 30 / 140 | 200 / 30 / 140 |
| **RoboFail** | Existing benchmark (test only) | – / – / 153 | – / – / 30 |

## Directory Structure

Each dataset split is organized as follows:

```
<dataset_name>/
├── metadata_execution.jsonl                    # Rich metadata for execution verification samples
├── metadata_planning.jsonl                     # Rich metadata for planning verification samples
├── internVL_dataset_execution_vanilla.jsonl    # InternVL fine-tuning format (execution, no CoT)
├── internVL_dataset_execution_thinking.jsonl   # InternVL fine-tuning format (execution, with CoT)
├── internVL_dataset_planning_vanilla.jsonl     # InternVL fine-tuning format (planning, no CoT)
├── internVL_dataset_planning_thinking.jsonl    # InternVL fine-tuning format (planning, with CoT)
└── records/                                    # Images organized by task, failure mode, and episode
    └── <taskvar>/
        └── <failure_mode>/
            └── ep_<id>/
                └── <subtask_id>/
                    ├── start_img_viewpoint_0.png
                    ├── start_img_viewpoint_1.png
                    ├── start_img_viewpoint_2.png
                    ├── end_img_viewpoint_0.png
                    ├── end_img_viewpoint_1.png
                    └── end_img_viewpoint_2.png
```
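
The tree above can be walked programmatically to enumerate execution samples directly from disk. A minimal sketch, assuming the dataset has been extracted locally (the function name is illustrative, not part of the dataset tooling):

```python
from pathlib import Path

def list_execution_samples(records_root):
    """Yield (taskvar, failure_mode, episode, subtask, image_paths) tuples
    by walking the records/ tree layout described above."""
    root = Path(records_root)
    # Layout: records/<taskvar>/<failure_mode>/ep_<id>/<subtask_id>/*.png
    for subtask_dir in sorted(root.glob("*/*/ep_*/*")):
        if not subtask_dir.is_dir():
            continue
        taskvar, failure_mode, episode, subtask = subtask_dir.relative_to(root).parts
        # Start images first, then end images; viewpoints 0..2 within each phase
        images = [subtask_dir / f"{phase}_img_viewpoint_{v}.png"
                  for phase in ("start", "end") for v in range(3)]
        yield taskvar, failure_mode, episode, subtask, images
```

The fixed start/end and viewpoint ordering mirrors the `images` list in the metadata files, so samples gathered this way line up with the metadata convention.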

## Task Types

### Execution Verification

Given a high-level task goal, a subtask description, and multi-view images **before and after** the subtask execution, the model must determine whether the subtask was completed successfully and categorize the failure mode.

- **Input**: 6 images (3 viewpoints × 2 timesteps: start and end) + task instruction + subtask description
- **Output**: Boolean (success/failure) + failure category

### Planning Verification

Given a high-level task goal, a proposed plan, and the initial scene image, the model must determine whether the plan is correct and categorize the failure mode.

- **Input**: 1 image (front view of the initial scene) + task instruction + proposed plan
- **Output**: Boolean (correct/incorrect) + failure category

## Failure Categories

### Execution Failures

| Category | Description |
|---|---|
| `success` | The subtask was completed successfully |
| `no gripper close` | The gripper is correctly positioned but did not close its jaws |
| `imprecise grasping/pushing` | The gripper attempted the correct object but missed due to inaccurate positioning |
| `wrong object state or placement` | The correct object was manipulated but the final state or placement is wrong |
| `wrong object manipulated` | The gripper manipulated the wrong object |
| `no progress` | Neither the scene state nor the robot's configuration changed in any meaningful way |

### Planning Failures

| Category | Description |
|---|---|
| `success` | The plan is correct |
| `missing subtasks` | One or several required subtasks are missing from the plan |
| `wrong object manipulated` | One or several subtasks manipulate the wrong object |
| `wrong object state or placement` | One or several subtasks select the wrong target, location, or state |
| `wrong order` | Subtasks are not in the right order, breaking causal dependencies |
| `contradictory subtasks` | Some subtasks conflict with each other |
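
These labels appear verbatim in the `failure_mode` field of the metadata files (described below), so the category distribution of a split can be tallied in a few lines; a minimal sketch:

```python
import json
from collections import Counter

def failure_mode_counts(jsonl_path):
    """Tally samples per failure category in a metadata_*.jsonl file."""
    with open(jsonl_path) as f:
        return Counter(json.loads(line)["failure_mode"] for line in f)
```

`failure_mode_counts("metadata_planning.jsonl")`, for example, returns a `Counter` mapping each planning category above to its sample count.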

## Data Formats

### Metadata Files (`metadata_*.jsonl`)

These files contain rich per-sample annotations. Each line is a JSON object.

**Execution metadata** (`metadata_execution.jsonl`):

```json
{
  "taskvar": "real_hang_mug+1",
  "task_instruction": "take the pink mug and put it on the middle part of the hanger",
  "episode_id": 0,
  "images": [
    "records/real_hang_mug+1/translation_object/ep_0/1/start_img_viewpoint_0.png",
    "records/real_hang_mug+1/translation_object/ep_0/1/start_img_viewpoint_1.png",
    "records/real_hang_mug+1/translation_object/ep_0/1/start_img_viewpoint_2.png",
    "records/real_hang_mug+1/translation_object/ep_0/1/end_img_viewpoint_0.png",
    "records/real_hang_mug+1/translation_object/ep_0/1/end_img_viewpoint_1.png",
    "records/real_hang_mug+1/translation_object/ep_0/1/end_img_viewpoint_2.png"
  ],
  "failure_mode": "translation_object",
  "detailed_subtask_name": "move grasped object to hanger",
  "failure_reason": "the gripper remains closed from start to end...",
  "visible_objects": ["pink mug", "blue mug", "green mug", "wooden mug hanger", "navy table top", "robot arm equipped with a gripper"],
  "plan": ["grasp pink mug", "move grasped object to hanger", "release"],
  "reward": 0,
  "planning_reward": 1,
  "execution_reward": 0
}
```

**Planning metadata** (`metadata_planning.jsonl`):

```json
{
  "taskvar": "real_hang_mug+1",
  "episode_id": 4,
  "task_instruction": "put into the right part of the wooden hanger the mug that is the closest to the wooden hanger",
  "plan": ["grasp pink mug", "release pink mug", "move grasped pink mug to right part of hanger", "release pink mug"],
  "images": [
    "records/real_hang_mug+1/translation_target/ep_4/0/start_img_viewpoint_0.png"
  ],
  "planning_reward": 0,
  "execution_reward": 1,
  "failure_reason": "the plan introduces a contradiction by grasping then immediately releasing",
  "detailed_subtask_name": null,
  "failure_mode": "contradictory subtasks",
  "visible_objects": ["green mug", "pink mug", "blue mug", "wooden cup hanger", "navy table top", "robot arm equipped with a gripper"],
  "old_visible_objects": [],
  "old_task_instruction": "take the pink mug and put it on the middle part of the hanger",
  "old_plan": ["grasp the pink mug", "move the grasped object to the middle part of the hanger", "release"],
  "correct_plan": ["grasp pink mug", "move grasped pink mug to right part of hanger", "release pink mug"],
  "reward": 0
}
```

Key fields:

- `reward`: overall sample reward (0 = failure, 1 = success)
- `planning_reward` / `execution_reward`: indicate which stage failed
- `failure_mode`: fine-grained failure category
- `failure_reason`: natural-language explanation of the failure
- `old_*` fields (planning only): the original instruction/plan before perturbation
- `correct_plan` (planning only): ground-truth correct plan

### InternVL Fine-tuning Files (`internVL_dataset_*.jsonl`)

These files are formatted for direct fine-tuning of InternVL-style models. Each line is a JSON object containing a multi-turn conversation.

```json
{
  "id": 350,
  "image": ["path/to/img1.png", "path/to/img2.png"],
  "height_list": [256, 256],
  "width_list": [256, 256],
  "conversations": [
    {"from": "human", "value": "Multiview images at the start of the subtask: ..."},
    {"from": "gpt", "value": "<answer> True </answer> <category> success </category>"}
  ]
}
```

**Vanilla** variants produce direct answers: `<answer> boolean </answer> <category> category </category>`

**Thinking** variants include chain-of-thought reasoning: `<think> reasoning </think> <answer> boolean </answer> <category> category </category>`

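When scoring model outputs in either format, the tagged fields can be pulled out with a small regex parser. A minimal sketch following the tag layout above (the function name is illustrative):

```python
import re

def parse_response(text):
    """Extract the optional <think> reasoning plus the <answer> and
    <category> fields from a vanilla- or thinking-format response."""
    def grab(tag):
        m = re.search(rf"<{tag}>\s*(.*?)\s*</{tag}>", text, re.DOTALL)
        return m.group(1) if m else None
    answer = grab("answer")
    return {
        "think": grab("think"),  # None for vanilla outputs
        "answer": answer == "True" if answer else None,
        "category": grab("category"),
    }
```

For a vanilla response, `parse_response("<answer> True </answer> <category> success </category>")` returns `{"think": None, "answer": True, "category": "success"}`.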

## Image Details

- **Resolution**: 256 × 256 pixels
- **Execution samples**: 6 images (3 viewpoints × 2 timesteps). Viewpoints are front (0), left (1), and right (2).
- **Planning samples**: 1 image (front viewpoint of the initial scene)
- **Format**: PNG

## UR5-Fail Specific Details

The UR5-Fail dataset was collected on a real tabletop setup using a 6-DoF UR5 robot arm equipped with a gripper and three Intel RealSense D435 cameras. The robot runs the 3D-LOTUS++ policy autonomously on 34 manipulation tasks. Subtasks are manually labeled as success or failure for execution data; planning failures are generated with the automated perturbation method described in the paper.

## Usage

Load the metadata to iterate over samples:

```python
import json
from PIL import Image

with open("metadata_execution.jsonl") as f:
    for line in f:
        sample = json.loads(line)
        images = [Image.open(img_path) for img_path in sample["images"]]
        label = sample["execution_reward"]  # 1 = success, 0 = failure
        category = sample["failure_mode"]
        # ...
```

Load InternVL-format data for fine-tuning:

```python
import json

with open("internVL_dataset_execution_thinking.jsonl") as f:
    for line in f:
        sample = json.loads(line)
        image_paths = sample["image"]
        conversations = sample["conversations"]
        # ...
```

## Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{pacaud2025guardian,
  title={Guardian: Detecting Robotic Planning and Execution Errors with Vision-Language Models},
  author={Paul Pacaud and Ricardo Garcia Pinel and Shizhe Chen and Cordelia Schmid},
  booktitle={Workshop on Making Sense of Data in Robotics: Composition, Curation, and Interpretability at Scale at CoRL 2025},
  year={2025},
  url={https://openreview.net/forum?id=wps46mtC9B}
}
```

## License

This dataset is released under the Apache 2.0 license.