---
license: apache-2.0
task_categories:
- robotics
tags:
- lerobot
- robotics
- cable-insertion
- manipulation
- imitation-learning
- vision-language-action
- intrinsic
- ai-for-industry-challenge
- ur5e
- sim-to-real
configs:
- config_name: default
  data_files:
  - split: train
    path: "data/**/*.parquet"
---

# AIC Cable Insertion Dataset

## About the AI for Industry Challenge

This dataset was collected for the [AI for Industry Challenge (AIC)](https://www.intrinsic.ai/events/ai-for-industry-challenge), an open competition by **Intrinsic** (an Alphabet company) for developers and roboticists, aimed at solving high-impact problems in robotics and manufacturing.

The challenge task is **cable insertion**: commanding a UR5e robot arm to insert fiber-optic cable plugs (SFP modules and SC connectors) into ports on a configurable task board in simulation (Gazebo). Policies must generalize across randomized board poses, rail positions, and plug/port types.

**Competition Resources**

- **Event Page**: [intrinsic.ai/events/ai-for-industry-challenge](https://www.intrinsic.ai/events/ai-for-industry-challenge)
- **Toolkit Repository**: [github.com/intrinsic-dev/aic](https://github.com/intrinsic-dev/aic)
- **Discussion Forum**: [Open Robotics Discourse](https://discourse.openrobotics.org/c/competitions/ai-for-industry-challenge/)

---

## Dataset Description

This dataset contains teleoperated demonstrations of cable insertion recorded from the AIC Gazebo simulation environment as ROS 2 bag files (.mcap) and converted to **LeRobot v2.1** format for training Vision-Language-Action (VLA) policies.

### Key Facts

| Property | Value |
|---|---|
| **Robot** | UR5e (6-DOF) with impedance controller |
| **Simulator** | Gazebo (ROS 2) |
| **Episodes** | 5 |
| **Cameras** | 3 wrist-mounted (left, center, right) |
| **Camera Resolution** | 288×256 (downscaled from 1152×1024 at 0.25×) |
| **Frame Rate** | 20 fps |
| **Observation State** | 31-dim (TCP pose + velocity + error + joint positions + F/T wrench) |
| **Action Space** | 6-dim Cartesian velocity (linear xyz + angular xyz) |
| **Task Types** | SFP module → NIC port, SC plug → SC port |

### Tasks

Each episode is labeled with a language instruction identifying the plug type, target port, and target rail:

| Episode | Task Instruction |
|---|---|
| 0 | Insert the grasped SFP module into sfp_port_0 on the NIC card mounted on nic_rail_0 |
| 1 | Insert the grasped SFP module into sfp_port_0 on the NIC card mounted on nic_rail_2 |
| 2 | Insert the grasped SC plug into sc_port_base on SC port 1 mounted on sc_rail_1 |
| 3 | Insert the grasped SC plug into sc_port_base on SC port 0 mounted on sc_rail_0 |
| 4 | Insert the grasped SFP module into sfp_port_0 on the NIC card mounted on nic_rail_3 |

### Scene Variation

Each trial features different randomization to encourage policy generalization:

| Episode | Board Yaw (°) | Board Height (m) | Cable Type | Other Components Present |
|---|---|---|---|---|
| 0 (Trial 1) | ~25 | 1.140 | sfp_sc_cable | NIC cards on rails 0 & 1, SC mount, SFP mount |
| 1 (Trial 2) | ~45 | 1.200 | sfp_sc_cable | NIC card on rail 2, LC mount, SFP mount |
| 2 (Trial 3) | ~60 | 1.300 | sfp_sc_cable_reversed | SC ports on rails 0 & 1, SFP mount, SC mount, LC mount |
| 3 (Trial 5) | ~15 | 1.110 | sfp_sc_cable_reversed | SC port on rail 0, SFP mounts on both rails |
| 4 (Trial 7) | ~30 | 1.100 | sfp_sc_cable | NIC cards on rails 0 & 3, SC ports on both rails, LC mount, SFP mount |

---

## Data Format and Features

### Observation State (31-dim)

| Index | Feature | Description |
|---|---|---|
| 0–2 | `tcp_pose.position.{x,y,z}` | TCP position in base frame |
| 3–6 | `tcp_pose.orientation.{x,y,z,w}` | TCP orientation (quaternion) |
| 7–9 | `tcp_velocity.linear.{x,y,z}` | TCP linear velocity |
| 10–12 | `tcp_velocity.angular.{x,y,z}` | TCP angular velocity |
| 13–18 | `tcp_error.{x,y,z,rx,ry,rz}` | Tracking error (current vs. reference) |
| 19–24 | `joint_positions.{0–5}` | Joint angles (shoulder_pan → wrist_3) |
| 25–27 | `wrench.force.{x,y,z}` | Wrist force-torque sensor (force) |
| 28–30 | `wrench.torque.{x,y,z}` | Wrist force-torque sensor (torque) |

### Action (6-dim Cartesian velocity)

| Index | Feature | Description |
|---|---|---|
| 0–2 | `linear.{x,y,z}` | Cartesian linear velocity command |
| 3–5 | `angular.{x,y,z}` | Cartesian angular velocity command |

### Camera Views

Three wrist-mounted cameras provide stereo-like coverage of the insertion workspace:

- `observation.images.left_camera`: left wrist camera (288×256 RGB)
- `observation.images.center_camera`: center wrist camera (288×256 RGB)
- `observation.images.right_camera`: right wrist camera (288×256 RGB)

Videos are stored as MP4 files (H.264, 20 fps).

---

## Dataset Structure

```
aic_lerobot_dataset/
├── data/
│   └── chunk-000/
│       ├── episode_000000.parquet
│       ├── episode_000001.parquet
│       ├── episode_000002.parquet
│       ├── episode_000003.parquet
│       └── episode_000004.parquet
├── meta/
│   ├── info.json
│   ├── tasks.jsonl
│   ├── episodes.jsonl
│   ├── episodes_stats.jsonl
│   └── stats.json
└── videos/
    └── chunk-000/
        ├── observation.images.left_camera/
        │   └── episode_00000{0-4}.mp4
        ├── observation.images.center_camera/
        │   └── episode_00000{0-4}.mp4
        └── observation.images.right_camera/
            └── episode_00000{0-4}.mp4
```
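
The layout above can be navigated programmatically. A sketch assuming the LeRobot default chunk size of 1000 episodes (so the chunk index is `episode_index // 1000`):

```python
from pathlib import Path

CAMERAS = ("left_camera", "center_camera", "right_camera")

def episode_paths(root, episode_index, cameras=CAMERAS):
    """Build the parquet and per-camera video paths for one episode."""
    root = Path(root)
    chunk = f"chunk-{episode_index // 1000:03d}"  # assumes 1000 episodes/chunk
    ep = f"episode_{episode_index:06d}"
    parquet = root / "data" / chunk / f"{ep}.parquet"
    videos = {
        cam: root / "videos" / chunk / f"observation.images.{cam}" / f"{ep}.mp4"
        for cam in cameras
    }
    return parquet, videos

parquet, videos = episode_paths("aic_lerobot_dataset", 3)
```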

---

## Usage

### Loading with LeRobot

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("shu4dev/aic-cable-insertion")

# Access a frame
sample = dataset[0]
print(sample["observation.state"].shape)  # torch.Size([31])
print(sample["action"].shape)             # torch.Size([6])
```

### Loading with HuggingFace Datasets

```python
from datasets import load_dataset

ds = load_dataset("shu4dev/aic-cable-insertion")
print(ds["train"][0])
```
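
Loaded this way, the split is a flat table of frames; LeRobot v2.1 rows carry `episode_index` and `frame_index` columns alongside the state, action, and image features. A sketch of regrouping frames into episodes, using small in-memory stand-in rows instead of the real dataset:

```python
from collections import defaultdict

def group_by_episode(rows):
    """Group flat frame rows into per-episode lists, ordered by frame_index."""
    episodes = defaultdict(list)
    for row in rows:
        episodes[row["episode_index"]].append(row)
    for frames in episodes.values():
        frames.sort(key=lambda r: r["frame_index"])
    return dict(episodes)

# Stand-in rows; real rows also contain observation.state, action, timestamp, etc.
rows = [
    {"episode_index": 0, "frame_index": 1},
    {"episode_index": 0, "frame_index": 0},
    {"episode_index": 1, "frame_index": 0},
]
episodes = group_by_episode(rows)
```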

---

## Data Collection

Demonstrations were collected via **teleoperation** in the AIC Gazebo simulation environment using the LeRobot integration (`lerobot-record`) with keyboard-based Cartesian control. The robot starts each trial with the cable plug already grasped and positioned within a few centimeters of the target port.

Raw ROS 2 bag data (.mcap files, 10–16 GB each) was converted to LeRobot v2.1 format using a custom streaming converter that:

1. Filters to the 8 required ROS topics (skipping TF, contacts, and scoring)
2. Synchronizes all modalities to the center-camera timestamps at 20 Hz
3. Extracts the observation state from `/aic_controller/controller_state`, `/joint_states`, and `/fts_broadcaster/wrench`
4. Extracts actions from `/aic_controller/pose_commands` (Cartesian velocity mode)
5. Encodes the camera streams as H.264 MP4 via a direct ffmpeg pipe
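
Step 2 (timestamp synchronization) can be sketched as nearest-neighbor matching against the camera timeline. This is an illustrative reimplementation, not the actual converter, which is not included with this dataset:

```python
import numpy as np

def nearest_indices(camera_ts, modality_ts):
    """For each camera timestamp, return the index of the nearest sample
    in another modality's (sorted) timestamp array."""
    camera_ts = np.asarray(camera_ts)
    modality_ts = np.asarray(modality_ts)
    # Index of the first modality timestamp >= t, then compare with its
    # left neighbor and keep whichever is closer.
    right = np.searchsorted(modality_ts, camera_ts)
    right = np.clip(right, 1, len(modality_ts) - 1)
    left = right - 1
    choose_left = (camera_ts - modality_ts[left]) <= (modality_ts[right] - camera_ts)
    return np.where(choose_left, left, right)

# Camera ticks at 20 Hz; a state stream sampled at slightly offset times.
idx = nearest_indices([0.00, 0.05, 0.10], [0.00, 0.02, 0.049, 0.11])
```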

---

## Intended Use

This dataset is intended for:

- Training **imitation learning** policies (ACT, Diffusion Policy, etc.)
- Training **VLA models** (π0, GR00T, OpenVLA, etc.) on language-conditioned cable insertion
- Benchmarking sim-to-sim transfer for contact-rich manipulation
- Research on fine-grained insertion tasks with force feedback

---

## Citation

If you use this dataset, please cite the AI for Industry Challenge:

```bibtex
@misc{aic2026,
  title={AI for Industry Challenge Toolkit},
  author={Intrinsic Innovation LLC},
  year={2026},
  url={https://github.com/intrinsic-dev/aic}
}
```

## License

Apache License 2.0