shu4dev committed · Commit 6c228de (verified) · parent 41cd1e1

Upload folder using huggingface_hub
---
license: apache-2.0
task_categories:
- robotics
tags:
- lerobot
- robotics
- cable-insertion
- manipulation
- imitation-learning
- vision-language-action
- intrinsic
- ai-for-industry-challenge
- ur5e
- sim-to-real
configs:
- config_name: default
  data_files:
  - split: train
    path: "data/**/*.parquet"
---

# AIC Cable Insertion Dataset

## About the AI for Industry Challenge

This dataset was collected for the [AI for Industry Challenge (AIC)](https://www.intrinsic.ai/events/ai-for-industry-challenge), an open competition by **Intrinsic** (an Alphabet company) for developers and roboticists aimed at solving high-impact problems in robotics and manufacturing.

The challenge task is **cable insertion**: commanding a UR5e robot arm to insert fiber-optic cable plugs (SFP modules and SC connectors) into ports on a configurable task board in simulation (Gazebo). Policies must generalize across randomized board poses, rail positions, and plug/port types.

**Competition Resources**
- **Event Page**: [intrinsic.ai/events/ai-for-industry-challenge](https://www.intrinsic.ai/events/ai-for-industry-challenge)
- **Toolkit Repository**: [github.com/intrinsic-dev/aic](https://github.com/intrinsic-dev/aic)
- **Discussion Forum**: [Open Robotics Discourse](https://discourse.openrobotics.org/c/competitions/ai-for-industry-challenge/)

---

## Dataset Description

This dataset contains teleoperated demonstrations of cable insertion tasks recorded from the AIC Gazebo simulation environment as ROS 2 bag files (.mcap), converted to **LeRobot v2.1** format for training Vision-Language-Action (VLA) policies.

### Key Facts

| Property | Value |
|---|---|
| **Robot** | UR5e (6-DOF) with impedance controller |
| **Simulator** | Gazebo (ROS 2) |
| **Episodes** | 5 |
| **Cameras** | 3 wrist-mounted (left, center, right) |
| **Camera Resolution** | 288×256 (downscaled from 1152×1024 at 0.25×) |
| **FPS** | 20 Hz |
| **Observation State** | 31-dim (TCP pose + velocity + error + joint positions + F/T wrench) |
| **Action Space** | 6-dim Cartesian velocity (linear xyz + angular xyz) |
| **Task Types** | SFP module → NIC port, SC plug → SC port |

### Tasks

Each episode is labeled with a specific language instruction identifying the plug type, target port, and target rail:

| Episode | Task Instruction |
|---|---|
| 0 | Insert the grasped SFP module into sfp_port_0 on the NIC card mounted on nic_rail_0 |
| 1 | Insert the grasped SFP module into sfp_port_0 on the NIC card mounted on nic_rail_2 |
| 2 | Insert the grasped SC plug into sc_port_base on SC port 1 mounted on sc_rail_1 |
| 3 | Insert the grasped SC plug into sc_port_base on SC port 0 mounted on sc_rail_0 |
| 4 | Insert the grasped SFP module into sfp_port_0 on the NIC card mounted on nic_rail_3 |

### Scene Variation

Each trial is randomized differently to encourage policy generalization:

| Episode | Board Yaw (°) | Board Height (m) | Cable Type | Other Components Present |
|---|---|---|---|---|
| 0 (Trial 1) | ~25 | 1.140 | sfp_sc_cable | NIC cards on rails 0 & 1, SC mount, SFP mount |
| 1 (Trial 2) | ~45 | 1.200 | sfp_sc_cable | NIC card on rail 2, LC mount, SFP mount |
| 2 (Trial 3) | ~60 | 1.300 | sfp_sc_cable_reversed | SC ports on rails 0 & 1, SFP mount, SC mount, LC mount |
| 3 (Trial 5) | ~15 | 1.110 | sfp_sc_cable_reversed | SC port on rail 0, SFP mounts on both rails |
| 4 (Trial 7) | ~30 | 1.100 | sfp_sc_cable | NIC cards on rails 0 & 3, SC ports on both rails, LC mount, SFP mount |

---

## Data Format and Features

### Observation State (31-dim)

| Index | Feature | Description |
|---|---|---|
| 0–2 | `tcp_pose.position.{x,y,z}` | TCP position in base frame |
| 3–6 | `tcp_pose.orientation.{x,y,z,w}` | TCP orientation (quaternion) |
| 7–9 | `tcp_velocity.linear.{x,y,z}` | TCP linear velocity |
| 10–12 | `tcp_velocity.angular.{x,y,z}` | TCP angular velocity |
| 13–18 | `tcp_error.{x,y,z,rx,ry,rz}` | Tracking error (current vs. reference) |
| 19–24 | `joint_positions.{0–5}` | Joint angles (shoulder_pan → wrist_3) |
| 25–27 | `wrench.force.{x,y,z}` | Wrist force-torque sensor (force) |
| 28–30 | `wrench.torque.{x,y,z}` | Wrist force-torque sensor (torque) |

### Action (6-dim Cartesian velocity)

| Index | Feature | Description |
|---|---|---|
| 0–2 | `linear.{x,y,z}` | Cartesian linear velocity command |
| 3–5 | `angular.{x,y,z}` | Cartesian angular velocity command |

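The index layouts above map directly to array slices. A minimal sketch (the slice names here are illustrative labels, not fields of the dataset schema):

```python
import numpy as np

# Slice boundaries for the 31-dim observation.state vector, per the table above.
STATE_SLICES = {
    "tcp_position": slice(0, 3),        # x, y, z
    "tcp_orientation": slice(3, 7),     # quaternion x, y, z, w
    "tcp_linear_velocity": slice(7, 10),
    "tcp_angular_velocity": slice(10, 13),
    "tcp_error": slice(13, 19),         # x, y, z, rx, ry, rz
    "joint_positions": slice(19, 25),   # shoulder_pan -> wrist_3
    "force": slice(25, 28),
    "torque": slice(28, 31),
}

def split_state(state: np.ndarray) -> dict:
    """Split a 31-dim observation.state vector into named components."""
    assert state.shape == (31,)
    return {name: state[s] for name, s in STATE_SLICES.items()}

def split_action(action: np.ndarray) -> dict:
    """Split a 6-dim Cartesian velocity action into linear/angular parts."""
    assert action.shape == (6,)
    return {"linear": action[:3], "angular": action[3:]}
```

For example, `split_state(frame)["joint_positions"]` pulls out the six joint angles of a single frame.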
### Camera Views

Three wrist-mounted cameras provide multi-view coverage of the insertion workspace:

- `observation.images.left_camera` — Left wrist camera (288×256 RGB)
- `observation.images.center_camera` — Center wrist camera (288×256 RGB)
- `observation.images.right_camera` — Right wrist camera (288×256 RGB)

Videos are stored as MP4 files (H.264, 20 fps).

---

## Dataset Structure

```
aic_lerobot_dataset/
├── data/
│   └── chunk-000/
│       ├── episode_000000.parquet
│       ├── episode_000001.parquet
│       ├── episode_000002.parquet
│       ├── episode_000003.parquet
│       └── episode_000004.parquet
├── meta/
│   ├── info.json
│   ├── tasks.jsonl
│   ├── episodes.jsonl
│   ├── episodes_stats.jsonl
│   └── stats.json
└── videos/
    └── chunk-000/
        ├── observation.images.left_camera/
        │   └── episode_00000{0-4}.mp4
        ├── observation.images.center_camera/
        │   └── episode_00000{0-4}.mp4
        └── observation.images.right_camera/
            └── episode_00000{0-4}.mp4
```
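
The `meta/*.jsonl` files contain one JSON object per line. A minimal reader, demonstrated on a synthetic `episodes.jsonl` (the `episode_index` and `length` fields are typical of LeRobot v2.1 metadata; the values below are made up):

```python
import json
import tempfile
from pathlib import Path

def read_jsonl(path: Path) -> list[dict]:
    """Parse a .jsonl file: one JSON object per non-blank line."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

# Demonstration with a synthetic episodes.jsonl (illustrative values only).
demo = Path(tempfile.mkdtemp()) / "episodes.jsonl"
demo.write_text(
    '{"episode_index": 0, "length": 412}\n'
    '{"episode_index": 1, "length": 388}\n'
)
episodes = read_jsonl(demo)
total_frames = sum(ep["length"] for ep in episodes)
```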

---

## Usage

### Loading with LeRobot

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("shu4dev/aic-cable-insertion")

# Access a frame
sample = dataset[0]
print(sample["observation.state"].shape)  # torch.Size([31])
print(sample["action"].shape)             # torch.Size([6])
```

### Loading with HuggingFace Datasets

```python
from datasets import load_dataset

ds = load_dataset("shu4dev/aic-cable-insertion")
print(ds["train"][0])
```

---

## Data Collection

Demonstrations were collected via **teleoperation** in the AIC Gazebo simulation environment using the LeRobot integration (`lerobot-record`) with keyboard-based Cartesian control. The robot starts each trial with the cable plug already grasped and positioned within a few centimeters of the target port.

Raw ROS 2 bag data (.mcap files, 10–16 GB each) was converted to LeRobot v2.1 format using a custom streaming converter that:

1. Filters to only the 8 needed ROS topics (skipping TF, contacts, scoring)
2. Synchronizes all modalities to the center camera timestamps at 20 Hz
3. Extracts observation state from `/aic_controller/controller_state`, `/joint_states`, and `/fts_broadcaster/wrench`
4. Extracts actions from `/aic_controller/pose_commands` (Cartesian velocity mode)
5. Encodes camera streams as H.264 MP4 via a direct ffmpeg pipe

---

## Intended Use

This dataset is intended for:

- Training **imitation learning** policies (ACT, Diffusion Policy, etc.)
- Training **VLA models** (π0, GR00T, OpenVLA, etc.) with language-conditioned cable insertion
- Benchmarking sim-to-sim transfer for contact-rich manipulation
- Research on fine-grained insertion tasks with force feedback

---

## Citation

If you use this dataset, please cite the AI for Industry Challenge:

```
@misc{aic2026,
  title={AI for Industry Challenge Toolkit},
  author={Intrinsic Innovation LLC},
  year={2026},
  url={https://github.com/intrinsic-dev/aic}
}
```

## License

Apache License 2.0