# Human Archive
Human Archive is modeling human sensorimotor intelligence at scale. We collect up to 50,000 hours of egocentric video per week, making HA-Ego one of the largest and most diverse egocentric datasets available.
We’re backed by Y Combinator and engineers from OpenAI, BAIR, SAIL, Anduril Industries, Mercor, NVIDIA, Jane Street, Google, DoorDash AI Research, Reevo, AfterQuery, and the investors behind AMI Labs.
Follow us on X: https://x.com/babugi28
To purchase the full dataset, schedule a call here: https://cal.com/human-archive-0mw2ab/30min?user=human-archive-0mw2ab
# HA-Ego-Samples
A large-scale egocentric (first-person) video dataset capturing everyday human activities across commercial, residential, and industrial environments. Designed for robotics and manipulation research, the dataset features diverse tasks performed by thousands of unique individuals wearing head-mounted cameras.
## Dataset Statistics
| Metric | Value |
|---|---|
| Total Duration | 500.0 hours |
| Total Frames | 53,828,712 |
| Video Clips | 9,343 |
| Unique Persons | 4,651 |
| Task Categories | 259 |
| Environment Types | 45 |
| Mean Clip Length | 192.6 seconds |
| Storage Size | 2.05 TB |
| Encoding | H.264 (AVC) |
| Container | MP4 |
| Resolution | 1920 x 1080 |
| Frame Rate | 30 fps |
| Camera | Monocular head-mounted |
| Audio | No |
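As a quick sanity check, the headline figures above are mutually consistent. The sketch below recomputes the mean clip length and effective frame rate purely from the table's numbers:

```python
# Recompute derived statistics from the table's headline figures; all inputs
# below are copied from the table above, nothing is measured from the videos.
TOTAL_HOURS = 500.0
TOTAL_FRAMES = 53_828_712
NUM_CLIPS = 9_343

total_seconds = TOTAL_HOURS * 3600            # 1,800,000 s
mean_clip_sec = total_seconds / NUM_CLIPS     # ≈ 192.66 s, matching the table's 192.6 s
effective_fps = TOTAL_FRAMES / total_seconds  # ≈ 29.9 fps, close to the nominal 30 fps
print(f"{mean_clip_sec:.1f} s/clip, {effective_fps:.1f} fps")
```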
## Dataset Structure

```
HA-Ego-Samples/
├── commercial/
│   ├── factory/
│   │   ├── person1/
│   │   │   ├── person1_segment1.mp4
│   │   │   ├── person1_segment2.mp4
│   │   │   └── person1_segments.json
│   │   ├── person2/
│   │   └── ...
│   └── hospitality/
│       ├── person1373/
│       └── ...
├── residential/
│   ├── person4116/
│   └── ...
└── README.md
```
The dataset is organized into three categories:

- `commercial/factory/` – Activities in factory and manufacturing environments (persons 1–1,372)
- `commercial/hospitality/` – Activities in kitchens, restaurants, and hospitality settings (persons 1,373–4,115)
- `residential/` – Activities in home and residential environments (persons 4,116–4,651)
Each person folder contains one or more MP4 video segments and a single `{person_id}_segments.json` metadata file.
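Given the person-ID ranges above, the local path to any clip can be constructed directly. A minimal sketch (the helper names are ours; the ranges and layout are copied from this README):

```python
from pathlib import Path

# Map a person ID to its directory using the documented ranges
# (1–1372 factory, 1373–4115 hospitality, 4116–4651 residential).
def person_dir(root, person_num):
    if 1 <= person_num <= 1372:
        sub = "commercial/factory"
    elif 1373 <= person_num <= 4115:
        sub = "commercial/hospitality"
    elif 4116 <= person_num <= 4651:
        sub = "residential"
    else:
        raise ValueError(f"person{person_num} is outside the documented range")
    return Path(root) / sub / f"person{person_num}"

def segment_path(root, person_num, segment):
    return person_dir(root, person_num) / f"person{person_num}_segment{segment}.mp4"

print(segment_path("HA-Ego-Samples", 1373, 1).as_posix())
# HA-Ego-Samples/commercial/hospitality/person1373/person1373_segment1.mp4
```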
## Metadata Format (`segments.json`)

Each person directory includes a `{person_id}_segments.json` file with the following schema:
```json
{
  "person_id": "person1",
  "total_segments": 1,
  "total_duration_sec": 592.0,
  "segments": [
    {
      "person_id": "person1",
      "video_index": "segment1",
      "duration_sec": 592.0,
      "task": "operating_machine",
      "environment": "factory_floor",
      "width": 1920,
      "height": 1080,
      "fps": 30.0,
      "size_bytes": 727529626,
      "codec": "h264"
    }
  ]
}
```
### Field Descriptions
| Field | Type | Description |
|---|---|---|
| `person_id` | string | Unique person identifier (e.g., `"person1"`) |
| `total_segments` | int | Number of video segments for this person |
| `total_duration_sec` | float | Total duration across all segments (seconds) |
| `segments` | list | Array of per-segment metadata |
| `segments[].video_index` | string | Segment identifier (e.g., `"segment1"`) |
| `segments[].duration_sec` | float | Duration of this segment (seconds) |
| `segments[].task` | string | Activity label (e.g., `"cutting_vegetables"`) |
| `segments[].environment` | string | Environment label (e.g., `"kitchen"`) |
| `segments[].width` | int | Video width in pixels |
| `segments[].height` | int | Video height in pixels |
| `segments[].fps` | float | Frame rate (frames per second) |
| `segments[].size_bytes` | int | File size in bytes |
| `segments[].codec` | string | Video codec (always `"h264"`) |
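A lightweight structural check against this field table can catch malformed metadata early. A sketch (not an official validator; the field names and types are taken from the table above):

```python
# Minimal structural check for a {person_id}_segments.json document.
# Float-typed fields also accept ints, since JSON does not distinguish 592 from 592.0.
TOP_LEVEL = [("person_id", str), ("total_segments", int),
             ("total_duration_sec", (int, float)), ("segments", list)]
PER_SEGMENT = {
    "person_id": str, "video_index": str, "duration_sec": (int, float),
    "task": str, "environment": str, "width": int, "height": int,
    "fps": (int, float), "size_bytes": int, "codec": str,
}

def validate_segments(meta):
    """Return a list of problems; an empty list means the document checks out."""
    errors = []
    for field, typ in TOP_LEVEL:
        if not isinstance(meta.get(field), typ):
            errors.append(f"bad or missing top-level field: {field}")
    for i, seg in enumerate(meta.get("segments", [])):
        for field, typ in PER_SEGMENT.items():
            if not isinstance(seg.get(field), typ):
                errors.append(f"segments[{i}]: bad or missing field: {field}")
    return errors

# The example document from the schema section above passes cleanly:
example = {
    "person_id": "person1", "total_segments": 1, "total_duration_sec": 592.0,
    "segments": [{
        "person_id": "person1", "video_index": "segment1", "duration_sec": 592.0,
        "task": "operating_machine", "environment": "factory_floor",
        "width": 1920, "height": 1080, "fps": 30.0,
        "size_bytes": 727529626, "codec": "h264",
    }],
}
print(validate_segments(example))  # []
```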
## Camera Intrinsics
All videos were recorded with the same head-mounted camera model. The calibrated intrinsic parameters are:
### Intrinsic Matrix (K)

```
K = [[4425.0857,    0.0000,  974.5921],
     [   0.0000, 4384.7678,  522.1587],
     [   0.0000,    0.0000,    1.0000]]
```
| Parameter | Value |
|---|---|
| fx | 4425.0857 |
| fy | 4384.7678 |
| cx | 974.5921 |
| cy | 522.1587 |
### Distortion Coefficients (Brown–Conrady model)

```
dist = [-6.4654, 130.2946, -0.0033, 0.0356, -1119.5408]
```
| Coefficient | Value |
|---|---|
| k1 | -6.4654 |
| k2 | 130.2946 |
| p1 | -0.0033 |
| p2 | 0.0356 |
| k3 | -1119.5408 |
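For orientation, these intrinsics imply a fairly narrow pinhole field of view. A rough estimate (taking fx and fy at face value and ignoring the large distortion terms, so treat it as approximate only):

```python
import math

# Pinhole field-of-view estimate from the intrinsics listed above.
fx, fy = 4425.0857, 4384.7678
width, height = 1920, 1080

fov_h = 2 * math.degrees(math.atan(width / (2 * fx)))   # ≈ 24.5°
fov_v = 2 * math.degrees(math.atan(height / (2 * fy)))  # ≈ 14.0°
print(f"horizontal FOV ≈ {fov_h:.1f}°, vertical FOV ≈ {fov_v:.1f}°")
```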
To undistort frames using OpenCV:
```python
import cv2
import numpy as np

K = np.array([[4425.0857, 0.0, 974.5921],
              [0.0, 4384.7678, 522.1587],
              [0.0, 0.0, 1.0]])
dist = np.array([-6.4654, 130.2946, -0.0033, 0.0356, -1119.5408])

frame = cv2.imread("frame.png")
undistorted = cv2.undistort(frame, K, dist)
```
## Loading the Dataset

### Using Hugging Face `datasets`

```python
from datasets import load_dataset

# Load metadata only (fast — no video download)
ds = load_dataset("humanarchive/HA-Ego-Samples", split="train")
```
### Streaming Individual Videos

```python
from huggingface_hub import hf_hub_download

# Download a specific video clip
path = hf_hub_download(
    repo_id="humanarchive/HA-Ego-Samples",
    filename="commercial/factory/person1/person1_segment1.mp4",
    repo_type="dataset",
)
```
### Loading Metadata for a Person

```python
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="humanarchive/HA-Ego-Samples",
    filename="commercial/factory/person1/person1_segments.json",
    repo_type="dataset",
)

with open(path) as f:
    meta = json.load(f)

print(f"Person: {meta['person_id']}")
print(f"Segments: {meta['total_segments']}")
for seg in meta["segments"]:
    print(f"  {seg['video_index']}: {seg['task']} in {seg['environment']} ({seg['duration_sec']}s)")
```
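To process every person's metadata rather than one file at a time, the repository's file listing can be filtered for the `*_segments.json` pattern. A sketch (the helper name is ours; the Hub call is shown commented out because it needs network access):

```python
# Filter a repository file listing down to the per-person metadata files.
def metadata_paths(all_files):
    return sorted(p for p in all_files if p.endswith("_segments.json"))

# Usage against the live repo (requires network access):
#   from huggingface_hub import HfApi
#   files = HfApi().list_repo_files("humanarchive/HA-Ego-Samples", repo_type="dataset")
#   for path in metadata_paths(files):
#       ...

# Quick check on a hardcoded sample listing:
sample = [
    "README.md",
    "commercial/factory/person1/person1_segment1.mp4",
    "commercial/factory/person1/person1_segments.json",
]
print(metadata_paths(sample))  # ['commercial/factory/person1/person1_segments.json']
```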
### Batch Processing with Video Decoding

```python
import cv2
from huggingface_hub import hf_hub_download

video_path = hf_hub_download(
    repo_id="humanarchive/HA-Ego-Samples",
    filename="commercial/hospitality/person2000/person2000_segment1.mp4",
    repo_type="dataset",
)

cap = cv2.VideoCapture(video_path)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Process frame (1920x1080, BGR)
    pass
cap.release()
```
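Decoding all ~53.8 million frames is rarely necessary; a fixed stride gives a cheap temporal subsample. A sketch (the helper name is ours):

```python
# Keep every k-th frame instead of decoding-and-processing all of them.
# At the dataset's 30 fps, a target of 1 fps keeps every 30th frame.
def sampled_indices(total_frames, src_fps, target_fps):
    """Frame indices to keep when thinning src_fps video to ~target_fps."""
    step = max(1, round(src_fps / target_fps))
    return list(range(0, total_frames, step))

print(sampled_indices(90, 30.0, 1.0))  # [0, 30, 60]
```

In the decoding loop above, this corresponds to keeping a frame counter and processing a frame only when the counter is divisible by the stride.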
## Cloning the Full Dataset

For bulk access, clone the repository using Git LFS:

```bash
# Make sure Git LFS is installed
git lfs install

# Clone (this will download ~2 TB of video data)
git clone https://huggingface.co/datasets/humanarchive/HA-Ego-Samples
```
## License
This dataset is released under the Apache License 2.0.
## Citation

```bibtex
@dataset{ha_ego_samples_2026,
  title={HA-Ego-Samples: A Large-Scale Egocentric Video Dataset for Robotics},
  author={Human Archive},
  year={2026},
  url={https://huggingface.co/datasets/humanarchive/HA-Ego-Samples}
}
```