---
license: other
license_name: fair-noncommercial-research-license-v1
license_link: https://huggingface.co/datasets/Rice-RobotPI-Lab/egoinfinity/blob/main/LICENSE-Action100M
pretty_name: EgoInfinity
viewer: false
tags:
- egocentric
- hand-tracking
- 3d-scene
- video
- action-recognition
- derivative-of-action100m
---
# EgoInfinity Dataset
Derivative scene assets for a curated subset of [Action100M] (Meta FAIR)
clips, used as the data backend for the
[EgoInfinity Browser](https://huggingface.co/spaces/Rice-RobotPI-Lab/egoinfinity)
Space.
Source code: <https://github.com/Rice-RobotPI-Lab/EgoInfinity>
[Action100M]: https://github.com/facebookresearch/Action100M
## Contents
```
samples/
├── index.json # browse-time episode list (consumed by the Space)
└── <clip_id>/
├── scene.json # camera intrinsics, object metadata, asset paths
├── signals.json # per-frame action signals (OR-merged across objects)
├── thumb.jpg # 320×180 preview rendered from depth
├── recording.viser # full 3D scene (point cloud + meshes + hands)
│
│ # Visualization (lossy, fast for streaming)
├── depth.mp4 # MoGe-2 depth, inferno colormap
├── flow.mp4 # MEMFOF optical flow visualization
├── mask.mp4 # SAM-tracked object cutout × original RGB
│
│ # Hand reconstruction (lossless)
├── hand_joints.bin # (T, H, 21, 3) float32; 3D joint positions
├── hand_verts.bin # (T, H, 778, 3) float32; baked MANO vertices
├── hand_faces.bin # (F, 3) uint16; MANO topology
├── hand_meta.json # bone connectivity + helper metadata
│
│ # Object reconstruction (lossless)
├── object_pose.bin # (T, N_obj, 4, 4) float32; per-frame 6DoF
├── object_obb.bin # (N_obj, 8, 3) float32; first-valid-frame OBB
├── objects/obj_N.ply # SAM3D point cloud per object
│
│ # Raw arrays (lossless, downstream-ready)
├── depth.npz # (T, H, W) uint16 mm; lossless depth
├── masks.npz # per-object packed-bit SAM masks
├── bg_template.png # uint16-mm PNG; bg depth template
└── pose_track.json # full per-object tracker timeseries
```
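The hand reconstruction binaries are raw little-endian float32/uint16 arrays with the shapes listed above. A minimal loading sketch, assuming two hand slots per frame (left/right); confirm the actual hand count and joint ordering via `hand_meta.json`:

```python
import numpy as np

N_HANDS = 2  # assumed hands per frame (left/right); verify against hand_meta.json

# (F, 3) uint16 MANO triangle indices (shared across frames)
hand_faces = np.fromfile("hand_faces.bin", dtype=np.uint16).reshape(-1, 3)

# Infer T from file size, then reshape to the documented layouts.
hand_joints = np.fromfile("hand_joints.bin", dtype=np.float32).reshape(-1, N_HANDS, 21, 3)
hand_verts = np.fromfile("hand_verts.bin", dtype=np.float32).reshape(-1, N_HANDS, 778, 3)
# Both arrays should share the same frame count T.
assert hand_joints.shape[0] == hand_verts.shape[0]
```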
## Downloading
This dataset ships per-clip directories of mp4 / npz / bin / ply / json
files — it is **not** a tabular dataset. The HF auto-loader (`load_dataset(...)`)
will fail because the per-file JSON schemas are intentionally heterogeneous
(`scene.json`, `signals.json`, `hand_meta.json`, etc. each describe a
different aspect of the clip). Use `snapshot_download` instead:
```python
from huggingface_hub import snapshot_download
root = snapshot_download(
repo_id="Rice-RobotPI-Lab/egoinfinity",
repo_type="dataset",
# Optional: pull only what you need.
# allow_patterns=["samples/index.json", "samples/<clip_id>/*"],
)
# root / "samples" / "<clip_id>" now has all assets for that clip.
```
To grab a single clip:
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="Rice-RobotPI-Lab/egoinfinity",
repo_type="dataset",
filename="samples/<clip_id>/scene.json")
```
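A common pattern is to fetch the episode index first, pick a clip, and then pull only that clip's directory via `allow_patterns`. The sketch below assumes `index.json` is plain JSON readable with the standard library; its exact schema is not documented here, so inspect it before relying on specific keys:

```python
import json
from huggingface_hub import hf_hub_download, snapshot_download

REPO = "Rice-RobotPI-Lab/egoinfinity"

# Fetch the browse-time episode list (schema not documented here; inspect it).
index_path = hf_hub_download(repo_id=REPO, repo_type="dataset",
                             filename="samples/index.json")
index = json.load(open(index_path))

clip_id = "<clip_id>"  # pick one after inspecting index.json
root = snapshot_download(repo_id=REPO, repo_type="dataset",
                         allow_patterns=[f"samples/{clip_id}/*"])
# root / "samples" / clip_id now holds every asset for that clip.
```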
## Loading raw arrays
```python
import numpy as np, cv2, json
# Depth (uint16 mm → meters). Sentinel 0 = absent / NaN.
depth = np.load("depth.npz")["depth"] # (T, H, W) uint16
depth_m = depth.astype(np.float32) / 1000.0
# Per-object SAM masks (packed bits per frame per object).
m = np.load("masks.npz")
T, H, W = m["_shape"]
oids = m["_oids"] # ordered object ids
def mask_for(oid: int, t: int) -> np.ndarray:
bits = np.unpackbits(m[f"oid_{oid}"][t])[: H * W]
return bits.reshape(H, W).astype(bool)
# Background depth template (rest scene) → meters
bg = cv2.imread("bg_template.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / 1000.0
# Per-object tracker state: contact_soft, grasp_soft, motion, trust, chamfer,
# scale_correction, obs_obb_per_frame, etc. Keyed by str(oid).
pti = json.load(open("pose_track.json"))
# Per-frame 6DoF object pose (camera frame), (T, N_obj, 4, 4) float32
N_obj = len(json.load(open("scene.json"))["reconstruction"]["objects"])
poses = np.fromfile("object_pose.bin", dtype=np.float32).reshape(-1, N_obj, 4, 4)
```
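The remaining lossless arrays follow the same pattern. A short sketch continuing from the snippet above (reusing `m`, `oids`, `mask_for`, `H`, `W`, and `N_obj`), assuming the shapes documented in the directory listing:

```python
# First-valid-frame oriented bounding boxes, (N_obj, 8, 3) float32
obbs = np.fromfile("object_obb.bin", dtype=np.float32).reshape(-1, 8, 3)
assert obbs.shape[0] == N_obj

# Union of all per-object masks at frame t (useful for foreground/background splits)
def all_objects_mask(t: int) -> np.ndarray:
    out = np.zeros((H, W), dtype=bool)
    for oid in oids:
        out |= mask_for(int(oid), t)
    return out
```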
> **Note:** original RGB frames are not redistributed. Anything that needs
> the source pixels (re-running SAM3 detect, SAM2 track, MEMFOF flow, or
> SAM3D mesh build) cannot be done from this dataset alone. Algorithms that
> consume `(depth, masks, hand_*, mesh, pose, bg_template)` (grasp / contact
> classification, state-machine tuning, ICP-based pose refinement) work
> standalone.
`<clip_id>` is `<youtube_video_id>_<start_sec>_<end_sec>`. The only original
YouTube pixels that appear in this repository are inside the SAM-tracked
object region of `mask.mp4` (everything outside the mask is painted black);
no full source frames are redistributed.
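If you need to recover the source video id and time span from a clip id, note that YouTube video ids can themselves contain underscores, so split from the right. A small sketch, assuming the last two fields are integer seconds:

```python
def parse_clip_id(clip_id: str) -> tuple[str, int, int]:
    # <youtube_video_id>_<start_sec>_<end_sec>; video ids may contain "_"
    video_id, start_sec, end_sec = clip_id.rsplit("_", 2)
    return video_id, int(start_sec), int(end_sec)
```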
## License
This dataset is released under the FAIR Noncommercial Research License v1
(see [LICENSE-Action100M](LICENSE-Action100M)) for **noncommercial research
use only**. Per Section 1.b.ii, redistribution must include a copy of this
license file.
### Attribution
- **Source clips** are from [Action100M] (Meta FAIR). Full source videos
remain on YouTube; only the SAM-tracked region appears in `mask.mp4` as
a per-frame cutout.
- **Depth maps** were generated using MoGe-2.
- **Optical flow** was computed using MEMFOF.
- **Object segmentation** uses Meta SAM-3 / SAM-3D.
- **Hand parameters** were estimated using a WiLoR-based pipeline. Vertex
positions are baked from the MANO model (Romero et al., 2017); MANO weights
are not redistributed.
## Citation
```bibtex
@misc{egoinfinity2026,
title = {EgoInfinity: A Web-Scale Data Engine for Video-to-Action Robot Learning through Egocentric Views},
author = {Rice Robot Perception \& Intelligence Lab},
year = {2026},
note = {Preview release}
}
```