---
license: other
license_name: fair-noncommercial-research-license-v1
license_link: >-
  https://huggingface.co/datasets/Rice-RobotPI-Lab/egoinfinity/blob/main/LICENSE-Action100M
pretty_name: EgoInfinity
viewer: false
tags:
  - egocentric
  - hand-tracking
  - 3d-scene
  - video
  - action-recognition
  - derivative-of-action100m
---

# EgoInfinity Dataset

Derivative scene assets for a curated subset of Action100M (Meta FAIR) clips, used as the data backend for the EgoInfinity Browser Space.

Source code: https://github.com/Rice-RobotPI-Lab/EgoInfinity

## Contents

```
samples/
├── index.json                 # browse-time episode list (consumed by the Space)
└── <clip_id>/
    ├── scene.json             # camera intrinsics, object metadata, asset paths
    ├── signals.json           # per-frame action signals (OR-merged across objects)
    ├── thumb.jpg              # 320×180 preview rendered from depth
    ├── recording.viser        # full 3D scene (point cloud + meshes + hands)
    │
    │  # Visualization (lossy, fast for streaming)
    ├── depth.mp4              # MoGe-2 depth, inferno colormap
    ├── flow.mp4               # MEMFOF optical flow visualization
    ├── mask.mp4               # SAM-tracked object cutout × original RGB
    │
    │  # Hand reconstruction (lossless)
    ├── hand_joints.bin        # (T, H, 21, 3) float32; 3D joint positions
    ├── hand_verts.bin         # (T, H, 778, 3) float32; baked MANO vertices
    ├── hand_faces.bin         # (F, 3) uint16; MANO topology
    ├── hand_meta.json         # bone connectivity + helper metadata
    │
    │  # Object reconstruction (lossless)
    ├── object_pose.bin        # (T, N_obj, 4, 4) float32; per-frame 6DoF
    ├── object_obb.bin         # (N_obj, 8, 3) float32; first-valid-frame OBB
    ├── objects/obj_N.ply      # SAM3D point cloud per object
    │
    │  # Raw arrays (lossless, downstream-ready)
    ├── depth.npz              # (T, H, W) uint16 mm; lossless depth
    ├── masks.npz              # per-object packed-bit SAM masks
    ├── bg_template.png        # uint16-mm PNG; bg depth template
    └── pose_track.json        # full per-object tracker timeseries
```

## Downloading

This dataset ships per-clip directories of mp4 / npz / bin / ply / json files; it is not a tabular dataset. The HF auto-loader (`load_dataset(...)`) will fail because the per-file JSON schemas are intentionally heterogeneous (`scene.json`, `signals.json`, `hand_meta.json`, etc. each describe a different aspect of the clip). Use `snapshot_download` instead:

```python
from huggingface_hub import snapshot_download

root = snapshot_download(
    repo_id="Rice-RobotPI-Lab/egoinfinity",
    repo_type="dataset",
    # Optional: pull only what you need.
    # allow_patterns=["samples/index.json", "samples/<clip_id>/*"],
)
# root / "samples" / "<clip_id>" now has all assets for that clip.
```

To grab a single file from a clip:

```python
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="Rice-RobotPI-Lab/egoinfinity",
    repo_type="dataset",
    filename="samples/<clip_id>/scene.json",
)
```

## Loading raw arrays

```python
import numpy as np, cv2, json

# Depth (uint16 mm → meters). Sentinel 0 = absent / NaN.
depth = np.load("depth.npz")["depth"]                # (T, H, W) uint16
depth_m = depth.astype(np.float32) / 1000.0

# Per-object SAM masks (packed bits per frame per object).
m = np.load("masks.npz")
T, H, W = m["_shape"]
oids = m["_oids"]                                    # ordered object ids

def mask_for(oid: int, t: int) -> np.ndarray:
    bits = np.unpackbits(m[f"oid_{oid}"][t])[: H * W]
    return bits.reshape(H, W).astype(bool)

# Background depth template (rest scene) → meters
bg = cv2.imread("bg_template.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / 1000.0

# Per-object tracker state: contact_soft, grasp_soft, motion, trust, chamfer,
# scale_correction, obs_obb_per_frame, etc. Keyed by str(oid).
pti = json.load(open("pose_track.json"))

# Per-frame 6DoF object pose (camera frame), (T, N_obj, 4, 4) float32
N_obj = len(json.load(open("scene.json"))["reconstruction"]["objects"])
poses = np.fromfile("object_pose.bin", dtype=np.float32).reshape(-1, N_obj, 4, 4)
```
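
The remaining lossless buffers (`hand_*.bin`, `object_obb.bin`) follow the shapes listed under Contents. A minimal sketch, assuming the frame count T matches `depth.npz` and deriving the per-clip hand count from the file size:

```python
import numpy as np

# Frame count T from the depth stack; hand count (the H in the shapes above)
# derived from the file size, since it is fixed per clip.
T = np.load("depth.npz")["depth"].shape[0]

joints = np.fromfile("hand_joints.bin", dtype=np.float32)
n_hands = joints.size // (T * 21 * 3)
joints = joints.reshape(T, n_hands, 21, 3)                                            # 3D joint positions
verts = np.fromfile("hand_verts.bin", dtype=np.float32).reshape(T, n_hands, 778, 3)   # baked MANO vertices
faces = np.fromfile("hand_faces.bin", dtype=np.uint16).reshape(-1, 3)                 # MANO triangle topology

# First-valid-frame oriented bounding box, 8 corners per object.
obbs = np.fromfile("object_obb.bin", dtype=np.float32).reshape(-1, 8, 3)
```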

Note: original RGB frames are not redistributed. Anything that needs the source pixels (re-running SAM3 detection, SAM2 tracking, MEMFOF flow, or the SAM3D mesh build) cannot be done from this dataset alone. Algorithms that consume only the distributed assets (depth, masks, hand_*, meshes, poses, bg_template), such as grasp / contact classification, state-machine tuning, or ICP-based pose refinement, work standalone.

`<clip_id>` is `<youtube_video_id>_<start_sec>_<end_sec>`. The only original YouTube pixels that appear in this repository are inside the SAM-tracked object region of `mask.mp4` (everything outside the mask is painted black); no full source frames are redistributed.
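
Since YouTube video ids can themselves contain underscores, the parts are best recovered by splitting from the right; a small illustrative helper (not part of the dataset tooling):

```python
def parse_clip_id(clip_id: str):
    """Split <youtube_video_id>_<start_sec>_<end_sec>; video ids may contain '_'."""
    video_id, start_sec, end_sec = clip_id.rsplit("_", 2)
    return video_id, start_sec, end_sec
```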

## License

This dataset is released under the FAIR Noncommercial Research License v1 (see LICENSE-Action100M) for noncommercial research use only. Per Section 1.b.ii, redistribution must include a copy of this license file.

## Attribution

- Source clips are from Action100M (Meta FAIR). Full source videos remain on YouTube; only the SAM-tracked region appears in `mask.mp4` as a per-frame cutout.
- Depth maps were generated using MoGe-2.
- Optical flow was computed using MEMFOF.
- Object segmentation and per-object 3D reconstruction use Meta SAM-3 / SAM-3D.
- Hand parameters were estimated using a WiLoR-based pipeline. Vertex positions are baked from the MANO model (Romero et al., 2017); MANO weights are not redistributed.

## Citation

```bibtex
@misc{egoinfinity2026,
  title  = {EgoInfinity: A Web-Scale Data Engine for Video-to-Action Robot Learning through Egocentric Views},
  author = {Rice Robot Perception \& Intelligence Lab},
  year   = {2026},
  note   = {Preview release}
}
```