# MCAP-Housing: Egocentric RGB-D Household Manipulation Dataset

MCAP-Housing is an egocentric RGB + Depth + IMU dataset of human household manipulation activities, packaged in the robotics-native `.mcap` (ROS2) format and designed for robotics research, policy learning, and embodied AI.
This is a sample release. We can scale to custom episode counts, new activities, and specific environments on request. Contact us to discuss your requirements.
## Quick Facts
| Property | Value |
|---|---|
| Modalities | Synchronized RGB + 16-bit Depth + IMU + Point Clouds |
| Resolution (RGB) | 1920 × 1440 @ 60 FPS |
| Depth | 16-bit millimeter, LiDAR-sourced, aligned to RGB |
| Point Clouds | Per-frame colored XYZRGB (up to 50k points) |
| IMU | 6-axis (accel + gyro) + magnetometer + gravity + orientation @ 60 Hz |
| Pose | 6DoF camera pose (world → camera transform) per frame |
| Activities | 10 household manipulation sequences |
| Total Frames | ~30,000 synchronized RGB-D pairs |
| Total Size | ~30 GB |
| Container | .mcap with ROS2 CDR serialization |
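Since depth frames are stored as 16-bit integers in millimeters, a typical first step is converting them to meters. A minimal numpy sketch on synthetic values (not dataset samples), assuming the common convention that 0 marks pixels with no depth return:

```python
import numpy as np

# Synthetic stand-in for one aligned depth frame: uint16 millimeters.
# The zero-means-invalid convention is an assumption, not dataset-specified.
raw_mm = np.array([[500, 1250], [0, 3200]], dtype=np.uint16)

depth_m = raw_mm.astype(np.float32) / 1000.0  # millimeters -> meters
depth_m[raw_mm == 0] = np.nan                 # mask invalid pixels
```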
## What's Included Per Sequence

Each `.mcap` file contains 11 synchronized ROS2 topics:
| Topic | Message Type | Description |
|---|---|---|
| /camera/rgb/compressed | sensor_msgs/CompressedImage | JPEG-encoded RGB frames |
| /camera/depth/aligned | sensor_msgs/Image | Raw 16-bit depth aligned to RGB |
| /camera/depth/filtered | sensor_msgs/Image | Bilateral-filtered depth (hole-filled) |
| /camera/depth/colorized | sensor_msgs/Image | Turbo-colormap depth visualization |
| /camera/points | sensor_msgs/PointCloud2 | Colored XYZRGB point cloud |
| /camera/camera_info | sensor_msgs/CameraInfo | Per-frame intrinsics (fx, fy, cx, cy) |
| /tf | tf2_msgs/TFMessage | 6DoF camera pose (world → camera) |
| /imu | sensor_msgs/Imu | Linear acceleration + angular velocity |
| /imu/gravity | geometry_msgs/Vector3Stamped | Gravity vector |
| /imu/orientation | geometry_msgs/QuaternionStamped | Device orientation quaternion |
| /imu/mag | sensor_msgs/MagneticField | Magnetometer readings |
## Available on Request
Beyond the raw synchronized streams, the following are available on request:
- Ego-motion / trajectories (VIO-style) — smooth, drift-corrected camera trajectories
- SLAM reconstructions — dense maps, optimized trajectories, keyframe selection
- Accurate body pose estimation — full skeletal tracking during manipulation
- State-of-the-art 3D hand landmarks — true 3D hand joint positions, not 2D reprojections
- QC-validated data — quality-checked sequences with automated scoring for frame drops, motion blur, depth sanity, and sync integrity
- Additional QA layers and consistency checks tailored to your specific training setup
Contact us to discuss which derived signals you need.
## Data Quality
- Tight RGB ↔ Depth ↔ IMU synchronization (all streams at 60 Hz)
- Per-frame camera intrinsics (not a single fixed calibration)
- Per-frame 6DoF pose from visual-inertial odometry
- Depth hole-filling and bilateral filtering provided as separate topics
- Full QC reports and filtered datasets available on request
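One quick way to sanity-check the 60 Hz synchronization on your own machine is to inspect inter-message timestamp deltas per topic. A hedged sketch on synthetic nanosecond timestamps; a real check would collect `message.log_time` values for one topic from the file:

```python
import numpy as np

# Synthetic 60 Hz log times in nanoseconds, standing in for the
# message.log_time sequence of a single topic.
stamps_ns = np.arange(10) * int(1e9 / 60)

deltas_s = np.diff(stamps_ns) / 1e9        # inter-frame gaps in seconds
mean_rate_hz = 1.0 / deltas_s.mean()       # should be close to 60 Hz

# Flag possible frame drops: gaps much larger than the nominal period.
drops = np.flatnonzero(deltas_s > 1.5 / 60)
```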
## Getting Started

### Inspect a file

```shell
pip install mcap
mcap info Chopping.mcap
```

Note that the `mcap` command-line tool is distributed separately from the Python `mcap` package; see the MCAP project's install instructions.
### Read in Python

```python
from mcap.reader import make_reader
from mcap_ros2.decoder import DecoderFactory

with open("Chopping.mcap", "rb") as f:
    reader = make_reader(f, decoder_factories=[DecoderFactory()])
    for schema, channel, message, decoded in reader.iter_decoded_messages():
        if channel.topic == "/camera/rgb/compressed":
            print(f"RGB frame at t={message.log_time}, size={len(decoded.data)} bytes")
        elif channel.topic == "/camera/depth/aligned":
            print(f"Depth frame: {decoded.width}x{decoded.height}, encoding={decoded.encoding}")
        elif channel.topic == "/camera/points":
            print(f"Point cloud: {decoded.width} points")
```
### Visualize
Open any .mcap file directly in Foxglove Studio for full 3D visualization of RGB, depth, point clouds, and transforms.
## Dependencies

```shell
pip install mcap mcap-ros2-support numpy opencv-python
```
## Intended Uses
- Policy and skill learning — imitation learning, VLA pre-training
- Action detection and segmentation — temporal activity recognition
- Hand and body pose estimation — grasp analysis, manipulation understanding
- Depth-based reconstruction — SLAM, scene understanding, 3D mapping
- World-model training — ego-motion prediction, scene dynamics
- Sensor fusion research — RGB-D-IMU alignment and calibration
## Scaling & Custom Data
This release is a sample. We offer:
- Custom episode capture — specific activities, environments, and object sets
- Scalable data collection — hundreds to thousands of episodes on demand
- Derived signal pipelines — hand tracking, body pose, SLAM, tailored to your model
- Custom QC gates — filtering and validation matched to your training requirements
Reach out to discuss your needs.
## License
This dataset is released under CC-BY-NC-4.0. Free for research and non-commercial use with attribution. For commercial licensing, contact us.
Required attribution: "This work uses the MCAP-Housing dataset (Cortex Data Labs, 2025)."
## Contact
- Email: shashin.bhaskar@gmail.com
- Organization: Cortex Data Labs
For custom data capture, derived signals, QC-validated datasets, or commercial licensing, reach out directly.