---
license: cc-by-nc-4.0
task_categories:
- robotics
- video-classification
- depth-estimation
tags:
- egocentric
- rgbd
- manipulation
- mcap
- ros2
- imu
- pointcloud
- depth
- kitchen
- household
- slam
- hand-tracking
- body-pose
pretty_name: "MCAP-Housing: Egocentric RGB-D Manipulation Dataset"
size_categories:
- 1K<n<10K
---
# MCAP-Housing: Egocentric RGB-D Household Manipulation Dataset
**MCAP-Housing** is an egocentric **RGB + Depth + IMU** dataset of human household manipulation activities, packaged in robotics-native `.mcap` (ROS2) format. Designed for robotics research, policy learning, and embodied AI.
This is a **sample release**. We can scale to custom episode counts, new activities, and specific environments on request. Contact us to discuss your requirements.
---
## Quick Facts
| Property | Value |
|----------|-------|
| **Modalities** | Synchronized RGB + 16-bit Depth + IMU + Point Clouds |
| **Resolution (RGB)** | 1920 × 1440 @ 60 FPS |
| **Depth** | 16-bit millimeter, LiDAR-sourced, aligned to RGB |
| **Point Clouds** | Per-frame colored XYZRGB (up to 50k points) |
| **IMU** | 6-axis (accel + gyro) + magnetometer + gravity + orientation @ 60 Hz |
| **Pose** | 6DoF camera pose (world → camera transform) per frame |
| **Activities** | 10 household manipulation sequences |
| **Total Frames** | ~30,000 synchronized RGB-D pairs |
| **Total Size** | ~30 GB |
| **Container** | `.mcap` with ROS2 CDR serialization |
---
## What's Included Per Sequence
Each `.mcap` file contains **11 synchronized ROS2 topics**:
| Topic | Message Type | Description |
|-------|-------------|-------------|
| `/camera/rgb/compressed` | `sensor_msgs/CompressedImage` | JPEG-encoded RGB frames |
| `/camera/depth/aligned` | `sensor_msgs/Image` | Raw 16-bit depth aligned to RGB |
| `/camera/depth/filtered` | `sensor_msgs/Image` | Bilateral-filtered depth (hole-filled) |
| `/camera/depth/colorized` | `sensor_msgs/Image` | Turbo-colormap depth visualization |
| `/camera/points` | `sensor_msgs/PointCloud2` | Colored XYZRGB point cloud |
| `/camera/camera_info` | `sensor_msgs/CameraInfo` | Per-frame intrinsics (fx, fy, cx, cy) |
| `/tf` | `tf2_msgs/TFMessage` | 6DoF camera pose (world → camera) |
| `/imu` | `sensor_msgs/Imu` | Linear acceleration + angular velocity |
| `/imu/gravity` | `geometry_msgs/Vector3Stamped` | Gravity vector |
| `/imu/orientation` | `geometry_msgs/QuaternionStamped` | Device orientation quaternion |
| `/imu/mag` | `sensor_msgs/MagneticField` | Magnetometer readings |
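Because depth is aligned to RGB and intrinsics ship per frame on `/camera/camera_info`, point clouds can also be regenerated offline rather than read from `/camera/points`. A minimal pinhole back-projection sketch (`backproject_depth` is an illustrative helper, not part of the dataset tooling; it assumes depth has already been decoded into a metric `numpy` array):

```python
import numpy as np

def backproject_depth(depth_m: np.ndarray, fx: float, fy: float,
                      cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW metric depth map to an (H*W, 3) XYZ array
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Pixels with zero depth (holes) back-project to the origin and are typically masked out before downstream use.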
---
## Available on Request
Beyond the raw synchronized streams, the following are available on request:
- **Ego-motion / trajectories** (VIO-style) — smooth, drift-corrected camera trajectories
- **SLAM reconstructions** — dense maps, optimized trajectories, keyframe selection
- **Accurate body pose estimation** — full skeletal tracking during manipulation
- **State-of-the-art 3D hand landmarks** — true 3D hand joint positions, not 2D reprojections
- **QC-validated data** — quality-checked sequences with automated scoring for frame drops, motion blur, depth sanity, and sync integrity
- **Additional QA layers and consistency checks** tailored to your specific training setup
Contact us to discuss which derived signals you need.
---
## Data Quality
- Tight RGB ↔ Depth ↔ IMU synchronization (all streams at 60 Hz)
- Per-frame camera intrinsics (not a single fixed calibration)
- Per-frame 6DoF pose from visual-inertial odometry
- Depth hole-filling and bilateral filtering provided as separate topics
- Full QC reports and filtered datasets available on request
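The synchronization and frame-drop claims above can be spot-checked directly from message log times. A hypothetical helper (`check_frame_rate` is illustrative, not shipped with the dataset; it expects sorted per-topic timestamps in nanoseconds, as `message.log_time` reports them):

```python
import numpy as np

def check_frame_rate(log_times_ns, expected_hz=60.0, tolerance=0.25):
    """Return (mean_hz, ok) for a sorted list of per-topic log times.

    ok is True when the mean rate is within `tolerance` (fractional) of
    `expected_hz` and no single gap exceeds 1.5 nominal periods (i.e. no
    frame appears to have been dropped)."""
    t = np.asarray(log_times_ns, dtype=np.int64)
    gaps_s = np.diff(t) / 1e9
    mean_hz = 1.0 / gaps_s.mean()
    rate_ok = abs(mean_hz - expected_hz) / expected_hz <= tolerance
    no_drops = gaps_s.max() < 1.5 / expected_hz
    return mean_hz, bool(rate_ok and no_drops)
```

Running this over the timestamps of each of the 11 topics gives a quick, self-service sync sanity check before any heavier QC.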
---
## Getting Started
### Inspect a file
The `mcap` CLI (a standalone tool installed separately from the Python library; see https://mcap.dev/guides/cli) summarizes topics, message counts, and timing:
```bash
mcap info Chopping.mcap
```
### Read in Python
```python
from mcap.reader import make_reader
from mcap_ros2.decoder import DecoderFactory

with open("Chopping.mcap", "rb") as f:
    reader = make_reader(f, decoder_factories=[DecoderFactory()])
    for schema, channel, message, decoded in reader.iter_decoded_messages():
        if channel.topic == "/camera/rgb/compressed":
            print(f"RGB frame at t={message.log_time}, size={len(decoded.data)} bytes")
        elif channel.topic == "/camera/depth/aligned":
            print(f"Depth frame: {decoded.width}x{decoded.height}, encoding={decoded.encoding}")
        elif channel.topic == "/camera/points":
            print(f"Point cloud: {decoded.width} points")
```
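The decoded messages can then be turned into arrays for downstream use. A sketch with illustrative helpers (`rgb_from_compressed`, `depth_from_image` are not part of any shipped tooling), assuming JPEG-encoded RGB and 16-bit millimeter depth as described above:

```python
import numpy as np

def rgb_from_compressed(msg) -> np.ndarray:
    """Decode a sensor_msgs/CompressedImage (JPEG) into an HxWx3 BGR uint8 array."""
    import cv2  # opencv-python; imported here so depth decoding works without it
    buf = np.frombuffer(bytes(msg.data), dtype=np.uint8)
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

def depth_from_image(msg) -> np.ndarray:
    """Decode a 16-bit millimeter sensor_msgs/Image into an HxW float32 map in meters."""
    raw = np.frombuffer(bytes(msg.data), dtype=np.uint16).reshape(msg.height, msg.width)
    return raw.astype(np.float32) / 1000.0  # millimeters -> meters
```

Check `decoded.encoding` (expected to be `16UC1` for the aligned depth topic) before applying the millimeter interpretation.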
### Visualize
Open any `.mcap` file directly in [Foxglove Studio](https://foxglove.dev/) for full 3D visualization of RGB, depth, point clouds, and transforms.
### Dependencies
```bash
pip install mcap mcap-ros2-support numpy opencv-python
```
---
## Intended Uses
- **Policy and skill learning** — imitation learning, VLA pre-training
- **Action detection and segmentation** — temporal activity recognition
- **Hand and body pose estimation** — grasp analysis, manipulation understanding
- **Depth-based reconstruction** — SLAM, scene understanding, 3D mapping
- **World-model training** — ego-motion prediction, scene dynamics
- **Sensor fusion research** — RGB-D-IMU alignment and calibration
---
## Scaling & Custom Data
This release is a sample. We offer:
- **Custom episode capture** — specific activities, environments, and object sets
- **Scalable data collection** — hundreds to thousands of episodes on demand
- **Derived signal pipelines** — hand tracking, body pose, SLAM, tailored to your model
- **Custom QC gates** — filtering and validation matched to your training requirements
Reach out to discuss your needs.
---
## License
This dataset is released under **CC-BY-NC-4.0**. Free for research and non-commercial use with attribution. For commercial licensing, contact us.
**Required attribution:** *"This work uses the MCAP-Housing dataset (Cortex Data Labs, 2025)."*
---
## Contact
- **Email:** shashin.bhaskar@gmail.com
- **Organization:** [Cortex Data Labs](https://huggingface.co/cortexdatalabs)
For custom data capture, derived signals, QC-validated datasets, or commercial licensing, reach out directly.