Fidelity Data Factory – Egocentric State–Action Transitions (v0)
This repository contains an initial release of structured state–action–state′ transitions extracted from real-world egocentric video.
The goal of this dataset is to provide early infrastructure for learning dynamics and representations from large-scale human activity data.
Overview
Each data point is a short temporal transition of the form:
(s_t, a_t, s_{t+1})
Transitions are derived from monocular egocentric footage recorded in real factory environments.
This release does not include robot-specific signals such as torques or joint states, and is intended for research and exploration rather than deployment.
Data Contents
- ~200k transitions
- Egocentric (head / chest-mounted) viewpoint
- Real industrial environments
Transitions are stored in JSONL format.
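Reading the file is straightforward: each line is one JSON object. A minimal loader sketch, assuming top-level keys `s`, `a`, and `s_prime` as in the simplified schema that follows (the toy line here is invented for illustration):

```python
import json

def parse_transitions(lines):
    """Parse JSONL lines into (s, a, s_prime) transition tuples.

    Assumes each line is a JSON object with top-level keys
    "s", "a", and "s_prime".
    """
    for line in lines:
        if not line.strip():
            continue  # skip blank lines
        record = json.loads(line)
        yield record["s"], record["a"], record["s_prime"]

# Toy line for illustration; real records carry the full schema.
line = '{"s": {"ego_pose": [0.0]}, "a": {"ego_delta": [0.1]}, "s_prime": {"ego_pose": [0.1]}}'
s, a, s_prime = next(parse_transitions([line]))
```

Because records are one per line, the same pattern streams over the full file without loading it into memory.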
Schema (Simplified)
Each record contains:
- `s`:
  - `ego_pose`
  - `ego_velocity`
  - `hand_state`
  - `entities` (objects with image-space location)
  - `meta` (video id, timestamp)
- `a`:
  - `ego_delta`
  - `hand_delta`
  - `interaction_delta`
- `s_prime`:
  - Same structure as `s`, representing the next timestep
See schema.json for full details.
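To make the layout concrete, here is a hypothetical record shaped like the simplified schema. All values, and the inner fields of `entities` (`label`, `bbox`), are invented for illustration and may differ from what schema.json specifies:

```python
import json

# Hypothetical record; field values and entity sub-fields are assumptions.
record = {
    "s": {
        "ego_pose": [0.0, 0.0, 1.6],
        "ego_velocity": [0.0, 0.0, 0.0],
        "hand_state": "open",
        "entities": [{"label": "wrench", "bbox": [0.42, 0.55, 0.48, 0.61]}],
        "meta": {"video_id": "vid_0001", "timestamp": 12.4},
    },
    "a": {
        "ego_delta": [0.05, 0.0, 0.0],
        "hand_delta": [0.0, -0.02, 0.0],
        "interaction_delta": None,
    },
    "s_prime": {
        "ego_pose": [0.05, 0.0, 1.6],
        "ego_velocity": [0.5, 0.0, 0.0],
        "hand_state": "open",
        "entities": [{"label": "wrench", "bbox": [0.40, 0.55, 0.46, 0.61]}],
        "meta": {"video_id": "vid_0001", "timestamp": 12.5},
    },
}

# One JSONL line is simply the record serialized on a single line.
jsonl_line = json.dumps(record)
```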
Intended Use
This dataset may be useful for:
- World model research
- Offline RL
- Vision–language–action pretraining
- Learning dynamics from human activity
- Representation learning from egocentric video
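For world-model or offline-RL use, the transitions map directly onto supervised pairs for a one-step dynamics model f(s, a) → s′. A minimal sketch; the featurizers below (keeping only `ego_pose` and `ego_delta`) are illustrative assumptions, not part of the dataset:

```python
def to_training_pairs(transitions, state_features, action_features):
    """Assemble (input, target) pairs for a one-step dynamics model
    f(s, a) -> s', given user-supplied featurizers."""
    pairs = []
    for s, a, s_prime in transitions:
        x = state_features(s) + action_features(a)  # concatenated input
        y = state_features(s_prime)                 # next-state target
        pairs.append((x, y))
    return pairs

# Hypothetical featurizers: keep only ego pose / ego delta.
state_features = lambda s: list(s["ego_pose"])
action_features = lambda a: list(a["ego_delta"])

# Toy transition for illustration.
transitions = [
    ({"ego_pose": [0.0, 0.0]}, {"ego_delta": [0.1, 0.0]}, {"ego_pose": [0.1, 0.0]}),
]
pairs = to_training_pairs(transitions, state_features, action_features)
```

The resulting pairs can feed any regression model; richer featurizations would also fold in `hand_state` and `entities`.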
Limitations
- Monocular video only
- No force / torque signals
- No task labels
- Contains estimation noise
Credits
Original video data provided by BuildAI.
Enrichment and processing by Fidelity Dynamics.