---
license: cc-by-nc-4.0
task_categories:
- robotics
- video-classification
- depth-estimation
tags:
- egocentric
- rgbd
- manipulation
- mcap
- ros2
- imu
- pointcloud
- depth
- kitchen
- household
- slam
- hand-tracking
- body-pose
pretty_name: "MCAP-Housing: Egocentric RGB-D Manipulation Dataset"
size_categories:
- 1K<n<10K
---

# MCAP-Housing: Egocentric RGB-D Household Manipulation Dataset

**MCAP-Housing** is an egocentric **RGB + Depth + IMU** dataset of human household manipulation activities, captured with iPhone LiDAR and packaged in the robotics-native `.mcap` (ROS2) container format. It is designed for robotics research, policy learning, and embodied AI.

---

## Quick Facts
33
+
34
+ | Property | Value |
35
+ |----------|-------|
36
+ | **Modalities** | Synchronized RGB + 16-bit Depth + IMU + Point Clouds |
37
+ | **Resolution (RGB)** | 1920 × 1440 @ 60 FPS |
38
+ | **Depth** | 16-bit millimeter, LiDAR-sourced (256×192 native, aligned to RGB) |
39
+ | **Point Clouds** | Per-frame colored XYZRGB (up to 50k points) |
40
+ | **IMU** | 6-axis (accel + gyro) + magnetometer + gravity + orientation @ 60 Hz |
41
+ | **Pose** | 6DoF camera pose (world → camera transform) per frame |
42
+ | **Activities** | 10 household manipulation sequences |
43
+ | **Total Frames** | ~30,000 synchronized RGB-D pairs |
44
+ | **Total Size** | ~30 GB |
45
+ | **Container** | `.mcap` with ROS2 CDR serialization |
46
+ | **Compression** | zstd (~95% ratio) |
47
+
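The **Pose** row above is carried on `/tf` as a translation plus unit quaternion. A minimal sketch of assembling those into a 4 × 4 homogeneous world → camera matrix (the function name is hypothetical; the quaternion-to-rotation conversion is the standard formula):

```python
import numpy as np

def pose_to_matrix(t, q):
    """Build a 4x4 homogeneous transform from a translation t=(x, y, z)
    and a unit quaternion q=(x, y, z, w), as carried by /tf messages."""
    x, y, z, w = q
    # Standard quaternion-to-rotation-matrix formula
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Identity quaternion: a pure translation
T = pose_to_matrix((1.0, 2.0, 3.0), (0.0, 0.0, 0.0, 1.0))
```
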
---
49
+
50
+ ## What's Included Per Sequence
51
+
52
+ Each `.mcap` file contains **11 synchronized ROS2 topics**:
53
+
54
+ | Topic | Message Type | Description |
55
+ |-------|-------------|-------------|
56
+ | `/camera/rgb/compressed` | `sensor_msgs/CompressedImage` | JPEG-encoded RGB frames |
57
+ | `/camera/depth/aligned` | `sensor_msgs/Image` | Raw 16-bit depth aligned to RGB |
58
+ | `/camera/depth/filtered` | `sensor_msgs/Image` | Bilateral-filtered depth (hole-filled) |
59
+ | `/camera/depth/colorized` | `sensor_msgs/Image` | Turbo-colormap depth visualization |
60
+ | `/camera/points` | `sensor_msgs/PointCloud2` | Colored XYZRGB point cloud |
61
+ | `/camera/camera_info` | `sensor_msgs/CameraInfo` | Per-frame intrinsics (fx, fy, cx, cy) |
62
+ | `/tf` | `tf2_msgs/TFMessage` | 6DoF camera pose (world → camera) |
63
+ | `/imu` | `sensor_msgs/Imu` | Linear acceleration + angular velocity |
64
+ | `/imu/gravity` | `geometry_msgs/Vector3Stamped` | Gravity vector |
65
+ | `/imu/orientation` | `geometry_msgs/QuaternionStamped` | Device orientation quaternion |
66
+ | `/imu/mag` | `sensor_msgs/MagneticField` | Magnetometer readings |
67
+
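The raw aligned depth stream stores 16-bit millimeter values. A minimal sketch of turning the bytes of a decoded `sensor_msgs/Image` into a metric NumPy array (assumes `16UC1` encoding with 0 marking invalid pixels; the synthetic frame below only stands in for real message data):

```python
import numpy as np

def depth_to_meters(data: bytes, width: int, height: int) -> np.ndarray:
    """Convert raw 16-bit millimeter depth bytes into float32 meters.

    Assumes 16UC1 encoding (uint16, single channel); pixels stored as 0
    are treated as holes and become NaN in the output.
    """
    depth_mm = np.frombuffer(data, dtype=np.uint16).reshape(height, width)
    depth_m = depth_mm.astype(np.float32) / 1000.0  # mm -> m
    depth_m[depth_mm == 0] = np.nan                 # mark holes
    return depth_m

# Synthetic stand-in: a 4x3 frame at 1.5 m with one invalid pixel
raw = np.full((3, 4), 1500, dtype=np.uint16)
raw[0, 0] = 0
depth = depth_to_meters(raw.tobytes(), width=4, height=3)
```
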
---

## Sequences

| Sequence | Duration | Frames | Size |
|----------|----------|--------|------|
| Chopping | 58s | 3,449 | 3.2 GB |
| Filling water bottles | 57s | 3,442 | 3.5 GB |
| Folding T shirts | 66s | 3,977 | 3.7 GB |
| Grating in the Kitchen | 59s | 3,538 | 3.1 GB |
| Lighting a stove | 25s | 1,507 | 1.4 GB |
| Peeling Banana | 48s | 2,854 | 2.5 GB |
| Preparing a solution | 42s | 2,539 | 2.4 GB |
| Sweeping | 32s | 1,904 | 2.3 GB |
| Transferring processed spices | 39s | 2,368 | 2.2 GB |
| Working mortar and Pestle | 112s | 6,701 | 6.3 GB |

---

## Optional Derived Signals (Available on Request)

Beyond the raw synchronized streams, the following derived signals can be provided:

- **Ego-motion / trajectories** (VIO-style) — smooth, drift-corrected camera trajectories
- **SLAM reconstructions** — dense maps, optimized trajectories, keyframe selection
- **Accurate body pose estimation** — full skeletal tracking during manipulation
- **State-of-the-art 3D hand landmarks** — true 3D hand joint positions, not 2D reprojections
- **Additional QA layers and consistency checks** tailored to your training setup

---
98
+
99
+ ## Data Quality
100
+
101
+ - Tight RGB ↔ Depth ↔ IMU synchronization (all streams at 60 Hz)
102
+ - Per-frame camera intrinsics (not a single fixed calibration)
103
+ - Per-frame 6DoF pose from ARKit visual-inertial odometry
104
+ - Depth hole-filling and bilateral filtering provided as separate topics
105
+ - zstd compression with ~95% ratio for efficient storage and transfer
106
+
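Because intrinsics arrive per frame on `/camera/camera_info`, metric depth can be back-projected into camera-frame 3D points with the standard pinhole model, X = (u − cx)·Z/fx, Y = (v − cy)·Z/fy. A sketch under those assumptions (the function name is hypothetical):

```python
import numpy as np

def backproject(depth_m: np.ndarray, fx: float, fy: float,
                cx: float, cy: float) -> np.ndarray:
    """Back-project a metric depth map into an (H*W, 3) array of XYZ
    points in the camera frame using the pinhole model."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Synthetic check: a flat 2x2 depth map at exactly 1 m
pts = backproject(np.ones((2, 2), np.float32), fx=1000, fy=1000, cx=1, cy=1)
```

In practice fx, fy, cx, cy would come from the `K` matrix of each frame's `CameraInfo` message rather than fixed constants.
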
---

## Getting Started

### Inspect a file
The `mcap` CLI is distributed separately from the Python `mcap` package (see the MCAP project docs for installation):
```bash
mcap info Chopping.mcap
```

### Read in Python
```python
# pip install mcap mcap-ros2-support
from mcap.reader import make_reader
from mcap_ros2.decoder import DecoderFactory

with open("Chopping.mcap", "rb") as f:
    reader = make_reader(f, decoder_factories=[DecoderFactory()])
    for schema, channel, message, decoded in reader.iter_decoded_messages():
        if channel.topic == "/camera/rgb/compressed":
            print(f"RGB frame at t={message.log_time}, size={len(decoded.data)} bytes")
        elif channel.topic == "/camera/depth/aligned":
            print(f"Depth frame: {decoded.width}x{decoded.height}, encoding={decoded.encoding}")
        elif channel.topic == "/camera/points":
            print(f"Point cloud: {decoded.width} points")
```

### Visualize
Open any `.mcap` file directly in [Foxglove Studio](https://foxglove.dev/) for full 3D visualization of RGB, depth, point clouds, and transforms.

### Dependencies
```bash
pip install mcap mcap-ros2-support numpy opencv-python
```

---

## Intended Uses

- **Policy and skill learning** — imitation learning, VLA pre-training
- **Action detection and segmentation** — temporal activity recognition
- **Hand and body pose estimation** — grasp analysis, manipulation understanding
- **Depth-based reconstruction** — SLAM, scene understanding, 3D mapping
- **World-model training** — ego-motion prediction, scene dynamics
- **Sensor fusion research** — RGB-D-IMU alignment and calibration

---

## License

This dataset is released under **CC-BY-NC-4.0**. It is free for research and non-commercial use with attribution. For commercial licensing, contact us.

**Required attribution:** *"This work uses the MCAP-Housing dataset (Cortex Data Labs, 2025)."*

---

## Contact

- **Email:** shashin.bhaskar@gmail.com
- **Organization:** [Cortex Data Labs](https://huggingface.co/cortexdatalabs)

For custom data capture, derived signals (hand tracking, SLAM, body pose), or commercial licensing, reach out directly.