VectorW committed
Commit bc55827 · verified · 1 Parent(s): caa2038

README: paper title in citation, GitHub link, drop em-dashes

Files changed (1): README.md (+63, -10)

README.md CHANGED
@@ -19,6 +19,8 @@ clips. Used as the data backend for the
[EgoInfinity Browser](https://huggingface.co/spaces/Rice-RobotPI-Lab/egoinfinity)
Space.

+ Source code: <https://github.com/Rice-RobotPI-Lab/EgoInfinity>
+
[Action100M]: https://github.com/facebookresearch/Action100M

## Contents
@@ -27,19 +29,70 @@ Space.
samples/
├── index.json             # browse-time episode list (consumed by the Space)
└── <clip_id>/
-     ├── scene.json         # camera intrinsics, object metadata, durations
-     ├── signals.json       # per-frame action signals (timeseries)
+     ├── scene.json         # camera intrinsics, object metadata, asset paths
+     ├── signals.json       # per-frame action signals (OR-merged across objects)
      ├── thumb.jpg          # 320×180 preview rendered from depth
-     ├── depth.mp4          # MoGe-2 depth, inferno colormap (854×480)
-     ├── flow.mp4           # MEMFOF optical flow visualization
-     ├── mask.mp4           # SAM-tracked object mask cutout
      ├── recording.viser    # full 3D scene (point cloud + meshes + hands)
-     ├── hand_joints.bin    # (T, H, 21, 3) float32 — 3D joint positions
-     ├── hand_verts.bin     # (T, H, 778, 3) float32 — baked MANO vertices
-     ├── hand_faces.bin     # (F, 3) uint16 — MANO topology
-     └── hand_meta.json     # bone connectivity + helper metadata
+     │
+     │ # Visualization (lossy, fast for streaming)
+     ├── depth.mp4          # MoGe-2 depth, inferno colormap
+     ├── flow.mp4           # MEMFOF optical flow visualization
+     ├── mask.mp4           # SAM-tracked object cutout × original RGB
+     │
+     │ # Hand reconstruction (lossless)
+     ├── hand_joints.bin    # (T, H, 21, 3) float32; 3D joint positions
+     ├── hand_verts.bin     # (T, H, 778, 3) float32; baked MANO vertices
+     ├── hand_faces.bin     # (F, 3) uint16; MANO topology
+     ├── hand_meta.json     # bone connectivity + helper metadata
+     │
+     │ # Object reconstruction (lossless)
+     ├── object_pose.bin    # (T, N_obj, 4, 4) float32; per-frame 6DoF
+     ├── object_obb.bin     # (N_obj, 8, 3) float32; first-valid-frame OBB
+     ├── objects/obj_N.ply  # SAM3D point cloud per object
+     │
+     │ # Raw arrays (lossless, downstream-ready)
+     ├── depth.npz          # (T, H, W) uint16 mm; lossless depth
+     ├── masks.npz          # per-object packed-bit SAM masks
+     ├── bg_template.png    # uint16-mm PNG; bg depth template
+     └── pose_track.json    # full per-object tracker timeseries
+ ```
+
+ ## Loading raw arrays
+
+ ```python
+ import numpy as np, cv2, json
+
+ # Depth (uint16 mm → meters). Sentinel 0 = absent / NaN.
+ depth = np.load("depth.npz")["depth"]  # (T, H, W) uint16
+ depth_m = depth.astype(np.float32) / 1000.0
+
+ # Per-object SAM masks (packed bits per frame per object).
+ m = np.load("masks.npz")
+ T, H, W = m["_shape"]
+ oids = m["_oids"]  # ordered object ids
+ def mask_for(oid: int, t: int) -> np.ndarray:
+     bits = np.unpackbits(m[f"oid_{oid}"][t])[: H * W]
+     return bits.reshape(H, W).astype(bool)
+
+ # Background depth template (rest scene) → meters
+ bg = cv2.imread("bg_template.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / 1000.0
+
+ # Per-object tracker state: contact_soft, grasp_soft, motion, trust, chamfer,
+ # scale_correction, obs_obb_per_frame, etc. Keyed by str(oid).
+ pti = json.load(open("pose_track.json"))
+
+ # Per-frame 6DoF object pose (camera frame), (T, N_obj, 4, 4) float32
+ N_obj = len(json.load(open("scene.json"))["reconstruction"]["objects"])
+ poses = np.fromfile("object_pose.bin", dtype=np.float32).reshape(-1, N_obj, 4, 4)
```

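The hand and object binaries in the tree are flat buffers with the documented shapes, so they load the same way. A minimal sketch (shapes as listed above; `H = 2` hands per frame is an assumption, not stated by the README, and `T` falls out of the file size):

```python
import numpy as np

H = 2  # assumed hand count per frame; not stated explicitly in the README

# (T, H, 21, 3) joints and (T, H, 778, 3) baked MANO vertices, float32.
joints = np.fromfile("hand_joints.bin", dtype=np.float32).reshape(-1, H, 21, 3)
verts = np.fromfile("hand_verts.bin", dtype=np.float32).reshape(-1, H, 778, 3)

# (F, 3) uint16 triangle indices (MANO topology, shared by all frames).
faces = np.fromfile("hand_faces.bin", dtype=np.uint16).reshape(-1, 3)

# (N_obj, 8, 3) float32 OBB corners from each object's first valid frame.
obb = np.fromfile("object_obb.bin", dtype=np.float32).reshape(-1, 8, 3)
```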
+ > **Note:** original RGB frames are not redistributed. Anything that needs
+ > the source pixels (re-running SAM3 detect, SAM2 track, MEMFOF flow, or
+ > SAM3D mesh build) cannot be done from this dataset alone. Algorithms that
+ > consume `(depth, masks, hand_*, mesh, pose, bg_template)` (grasp / contact
+ > classification, state-machine tuning, ICP-based pose refinement) work
+ > standalone.
+
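As a toy illustration of such a standalone computation (not the dataset's actual grasp/contact classifier): with `joints` and `poses` loaded as above, and assuming the hand and object tracks share the clip's frame count, a crude per-frame proximity flag for object 0 is:

```python
# Object 0's camera-frame center per frame (translation column of the 4x4 pose).
center = poses[:, 0, :3, 3]  # (T, 3)

# Distance from every joint of every hand to that center, per frame.
dists = np.linalg.norm(joints.reshape(joints.shape[0], -1, 3) - center[:, None, :], axis=-1)

# 5 cm is an arbitrary illustrative threshold, not the pipeline's.
near = dists.min(axis=1) < 0.05  # (T,) bool
```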
`<clip_id>` is `<youtube_video_id>_<start_sec>_<end_sec>`. The only original
YouTube pixels that appear in this repository are inside the SAM-tracked
object region of `mask.mp4` (everything outside the mask is painted black);
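One practical wrinkle with this naming: YouTube video ids can themselves contain underscores, so the id should be split from the right. A small sketch, assuming integer-second bounds (the example id is made up):

```python
def parse_clip_id(clip_id: str) -> tuple[str, int, int]:
    # Peel the two time fields off the right; the video id may contain "_".
    video_id, start_sec, end_sec = clip_id.rsplit("_", 2)
    return video_id, int(start_sec), int(end_sec)

parse_clip_id("abc_DEF-123_45_60")  # -> ("abc_DEF-123", 45, 60)
```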
 
@@ -68,7 +121,7 @@ license file.

```bibtex
@misc{egoinfinity2026,
- title = {EgoInfinity: TBD},
+ title = {EgoInfinity: A Web-Scale Data Engine for Video-to-Action Robot Learning through Egocentric Views},
  author = {Rice Robot Perception \& Intelligence Lab},
  year = {2026},
  note = {Preview release}