# Stera-10M
Visualizer: https://platform.fpvlabs.ai/dataset/stera-10m/viz

## Dataset Summary
Stera-10M is an open egocentric multimodal dataset for embodied AI, robotics, world models, and spatial intelligence, captured end-to-end on commodity iPhone Pro hardware through the open Stera platform.
It contains 200 hours of synchronized first-person recordings across 500+ sessions from 20 contributors in 20+ unique environments, with 10 million RGB frames, LiDAR depth, ARKit 6-DoF camera pose, IMU, 21-joint MANO two-hand mocap, room mesh geometry, and hierarchical action language.
Stera-10M is, to our knowledge, the first open egocentric dataset combining hour-plus continuous capture with depth, 6-DoF pose, and dense hand annotations, collected and distributed entirely on commodity hardware that a researcher may already own.
Stera-10M is built for training and evaluating models that learn from real human work as a continuous, geometrically grounded, multimodal signal, not as short, isolated clips.
It is designed to support research and product development in:
- Vision-language-action (VLA) pretraining
- Embodied AI and world modeling
- Robot learning from human demonstration
- Long-horizon manipulation and planning
- Egocentric perception
- Action understanding at multiple temporal scales
- Hand-object interaction
- 3D / 4D scene and motion understanding
- Real-to-sim and sim-to-real pipelines
Check out the Stera Capture iOS app to record your own sessions in the same format. Check out the Stera SDK for the open pipeline used to process every recording in this dataset.
## Dataset Statistics
| Statistic | Value |
|---|---|
| Total recording duration | 200 h |
| Sessions | 584 |
| Mean session length | 20.5 min |
| Longest continuous session | 104 min |
| RGB frames | ~10M |
| Atomic action spans | 75,857 |
| Episodes | 9,922 |
| Sub-goals | 2,212 |
| Total dataset size | 1,600 GB |
## What makes Stera-10M different
Most existing egocentric datasets present a trade-off. Large datasets like Ego4D rely on heterogeneous capture hardware and lack consistent depth, 6-DoF pose, or hand annotations. High-fidelity datasets such as EgoExo4D and Aria Everyday Activities require gated, research-grade hardware (Project Aria) and involve short episode lengths. Production-quality manipulation datasets like HOI4D, HOT3D, and ARCTIC are limited to short clips in constrained settings.
Stera-10M is designed differently. It treats egocentric capture as a long-horizon, multimodal, fully open signal recorded on hardware anyone can buy. Each session includes:
- what the wearer sees
- how the camera moves through space (6-DoF pose)
- how the hands move (21-joint MANO, anchored in a global frame)
- what the depth geometry looks like (per-frame LiDAR depth and a session-level room mesh)
- what the IMU measures
- what task, sub-goal, episode, atomic action, and objects are involved (hierarchical instruction tree)
This makes Stera-10M especially useful for labs that need geometric grounding across hour-plus continuous activity, and for any researcher who wants to extend the corpus with their own captures using the same open stack.
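To make the hierarchical instruction tree concrete, here is a minimal sketch of flattening a session's annotations (session goal → sub-goals → episodes → atomic-action span indices) into one row per episode. The field names follow the session-level annotation schema (`session_goal`, `sub_goals`, `episodes`, `span_indices`); treat them as illustrative and adapt to your copy of the data.

```python
# Sketch: flatten a four-level Stera-10M annotation tree into episode rows.
# The synthetic `demo` session below is invented purely for illustration.

def flatten_session(session: dict) -> list[dict]:
    """Return one flat record per episode in the session's instruction tree."""
    rows = []
    for sub_goal in session.get("sub_goals", []):
        for episode in sub_goal.get("episodes", []):
            rows.append({
                "session_goal": session.get("session_goal"),
                "sub_goal": sub_goal.get("description"),
                "episode": episode.get("description"),
                "start_time": episode.get("start_time"),
                "end_time": episode.get("end_time"),
                # atomic actions are referenced by span index at the leaf level
                "n_atomic_spans": len(episode.get("span_indices", [])),
            })
    return rows

demo = {
    "session_goal": "prepare breakfast",
    "sub_goals": [{
        "sub_goal_id": 0, "description": "make coffee",
        "start_time": 0.0, "end_time": 95.2,
        "episodes": [{
            "episode_id": 0, "description": "grind beans",
            "start_time": 0.0, "end_time": 31.7,
            "span_indices": [0, 1, 2],
        }],
    }],
}

rows = flatten_session(demo)
print(rows[0]["episode"], rows[0]["n_atomic_spans"])  # grind beans 3
```

A flattened view like this is convenient for building episode-level training indices or for temporal-localization evaluation over the 9,922 episodes in the corpus.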
## Supported Tasks and Use Cases
Stera-10M can support a broad range of tasks, including but not limited to:
- Vision-Language-Action model pretraining
- Imitation learning and behavior cloning
- Long-horizon manipulation policy learning
- Egocentric action recognition
- Temporal action localization and segmentation
- Action captioning at multiple temporal scales
- Task and sub-goal prediction
- Hand-object interaction understanding
- Human-to-robot motion retargeting
- Hand pose estimation
- Visual odometry and trajectory learning
- SLAM and camera pose estimation
- Visual-language pretraining
- World model training
- Real-to-sim environment reconstruction
## Dataset Structure
Stera-10M is organized as a collection of sessions. Each session is a self-contained directory containing the egocentric RGB recording, a unified annotation.hdf5 file, the room mesh, calibrations, and rerun-sdk visualization recordings.
### Session Layout
A typical session directory contains:
```
<session_id>/
├── rgb.mp4                 # 1280×720 @ 15 fps, H.264, PII encrypted
├── annotation.hdf5         # all per-frame and sparse annotations
├── mesh.ply                # room mesh (~45k verts)
├── thumbnail.jpg
├── calibrations/
├── visualization.rrd       # full-session rerun recording
└── visualization_2min.rrd  # 2-min preview rerun recording
```
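Given that layout, a simple integrity check over a local download can be sketched with the standard library alone. The root path and the choice of which files to treat as required are assumptions; adjust both to your setup.

```python
from pathlib import Path

# Files the session layout above lists as the core per-session artifacts.
# Treating exactly these as required is an assumption for this sketch.
EXPECTED = ["rgb.mp4", "annotation.hdf5", "mesh.ply", "thumbnail.jpg"]

def list_sessions(root: Path) -> dict[str, list[str]]:
    """Map each session directory name to the expected files it is missing."""
    report = {}
    for session_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        missing = [f for f in EXPECTED if not (session_dir / f).exists()]
        report[session_dir.name] = missing
    return report
```

Running `list_sessions(Path("stera-10m/"))` over a download root would flag any partially transferred sessions before training starts.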
### Modalities
Each session includes the following modalities:
- RGB video from a head-mounted iPhone Pro (1280×720 @ 15 fps)
- LiDAR depth aligned to RGB (144×256, uint16 mm, per frame)
- ARKit 6-DoF camera pose (rotation, translation, timestamp, per frame)
- IMU (linear acceleration, angular velocity, orientation quaternion at ~100 Hz)
- Two-hand mocap (21-joint MANO, anchored in the global frame, with MANO betas and finger rotations)
- Room mesh (~45k vertices per session)
- Session metadata (frame counts, duration, start/end times)
- Hierarchical language captions at four temporal levels:
  - session
  - sub-goal
  - episode
  - atomic action
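Since the depth frames are stored as uint16 millimetres, a typical first step is converting them to float metres. The sketch below assumes zero pixels mark invalid LiDAR returns, a common convention but one you should verify against the dataset docs.

```python
import numpy as np

def depth_mm_to_m(depth_mm: np.ndarray) -> np.ndarray:
    """Convert a uint16 millimetre depth frame to float32 metres.

    Zero pixels are mapped to NaN on the assumption that they mark
    invalid LiDAR returns (check the dataset docs for the convention).
    """
    depth_m = depth_mm.astype(np.float32) / 1000.0
    depth_m[depth_mm == 0] = np.nan
    return depth_m

# Tiny synthetic frame standing in for one 144x256 LiDAR depth image.
frame = np.array([[0, 500], [1500, 65535]], dtype=np.uint16)
out = depth_mm_to_m(frame)
```

Here `out` holds NaN for the invalid pixel and 0.5 m, 1.5 m, and ~65.5 m for the rest; clipping the far tail is usually sensible for indoor room-scale scenes.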
## The Stera Platform
Stera-10M is the launch corpus for the Stera platform, an open, full-stack system for capturing, processing, evaluating, and exporting embodied AI data on commodity hardware. Every component used to produce this dataset is open and reproducible:
- Stera Capture: an iOS app that turns an iPhone Pro into a synchronized RGB / depth / IMU / 6-DoF pose recorder, voice-controlled and writing to MCAP
- Stera SDK: the processing pipeline used to generate every annotation in this dataset (trajectory, hand pose, hierarchical labels, upper-body tracking, mesh)
- Stera Evaluate: the quality layer that scores tracking drift, hand-detection rate, label coverage, and instruction-tree integrity
- Stera Export: converters to standard formats (HDF5, PLY, RRD, etc.)
Researchers who want to extend Stera-10M, capture in domains we have not covered, or build entirely new datasets in this format can do so using the same open stack.
Links:
- iOS app: fpvlabs.ai/app
- SDK: github.com/fpv-labs/stera-sdk
- Project page: fpvlabs.ai/stera
- Docs: fpvlabs.ai/stera/docs
- Paper: arXiv:2605.05945
## Social Impact
Stera-10M can help advance embodied AI, robot learning, world models, and assistive systems by making research-grade egocentric data, and the means to produce it, accessible to any researcher with a consumer iPhone.
At the same time, egocentric multimodal data raises important questions around privacy, consent, and downstream misuse. We encourage all users to work with the dataset responsibly and to align usage with privacy protection, human-centered AI principles, and beneficial real-world applications.
## Privacy, Ethics, and Consent
Because Stera-10M contains egocentric recordings of real-world human activity, privacy and consent are central considerations.
All contributors signed informed-consent agreements prior to recording. Recording was paused around non-consenting individuals. Incidentally captured faces are blurred via Stera SDK prior to release. Personally identifying or sensitive content is handled according to the dataset release policy.
## Access
Stera-10M is released for research and other non-commercial uses under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license, with a carve-out for WiLoR and MANO assets, which are subject to their own separate licence terms.
Please read the complete licence before use and make sure you understand the non-commercial and attribution conditions, as well as the WiLoR and MANO carve-outs noted above.
## Citation
```bibtex
@article{palanisamy2026mobileego,
  title   = {MobileEgo Anywhere: Open Infrastructure for long horizon egocentric data on commodity hardware},
  author  = {Palanisamy, Senthil and Anand, Abhishek and Rathore, Satpal Singh and Patnaik, Pratyush and Khatana, Shubhanshu},
  journal = {arXiv preprint arXiv:2605.05945},
  year    = {2026}
}
```
Please also credit WiLoR (Potamias et al., 2024), MANO (Romero et al., 2017), EgoBlur, and rerun-sdk.
## Contact
FPV Labs - contact@fpvlabs.ai