
Human Archive

Human Archive is modeling human sensorimotor intelligence at scale. We currently collect 2,000+ hours of multimodal data per week, making this the first and largest dataset of its kind.

We’re backed by Y Combinator and engineers from OpenAI, BAIR, SAIL, Anduril Industries, Mercor, NVIDIA, Jane Street, Google, DoorDash AI Research, Reevo, AfterQuery, and the investors behind AMI Labs.

Follow us on X

To purchase the full dataset, find time here

HA-Multi-Samples

A multimodal human activity dataset in LeRobot v3 containing synchronized RGB-D, tactile sensing, chest and wrist cameras, and 8 upper-body IMUs captured during household tasks.



Dataset Summary

| Metric | Value |
|---|---|
| Total episodes | 36 |
| Total frames | 420,630 |
| Frame rate | 30 fps |
| Total duration | 3 hours 54 minutes |
| Video streams | 6 synchronized cameras |
| Sensor modalities | Tactile (512 taxels), hand and body IMUs (8) |
| Unique tasks | 11 |
| Unique environments | 10 |
| Cross-modal alignment | < 33 ms (< 1 frame at 30 fps) |
| Video data size | ~64 GB |
| Sensor data size | ~66 MB (Parquet) |
| Average episode length | 6.5 minutes |
| Median episode length | 5.1 minutes |
| Shortest episode | 22.3 seconds |
| Longest episode | 21.3 minutes |

Task and Environment Breakdown

By Task

| Task | Episodes | Duration |
|---|---|---|
| Cleaning | 19 | 118.9 min |
| Cooking | 4 | 43.0 min |
| Ironing | 4 | 31.2 min |
| Folding and cleaning | 2 | 25.4 min |
| Folding clothes | 6 | 14.2 min |
| Placing shoes | 1 | 1.0 min |

By Environment

Environment labels describe the room type, not unique rooms. Multiple episodes labeled "Bedroom" or "Kitchen" may come from different physical locations.

| Environment | Episodes | Duration |
|---|---|---|
| Bedroom | 20 | 115.8 min |
| Kitchen | 7 | 61.0 min |
| Living room | 3 | 32.3 min |
| Bathroom | 3 | 18.3 min |
| Office | 1 | 3.8 min |
| Hallway | 2 | 2.5 min |

File Structure

```
HA-Multi-Samples/
├── meta/
│   ├── info.json                                  # Dataset schema, features, and configuration
│   ├── stats.json                                 # Per-feature statistics (min, max, mean, std)
│   ├── tasks.parquet                              # Task label table (11 rows)
│   └── episodes/
│       └── chunk-000/
│           └── file-000.parquet                   # Episode metadata table (36 rows)
├── data/
│   └── chunk-000/
│       └── file-000.parquet                       # All sensor data (420,630 rows)
└── videos/
    ├── observation.images.egocentric/
    │   └── chunk-000/
    │       ├── file-000.mp4                       # Episode 0
    │       ├── file-001.mp4                       # Episode 1
    │       └── ...                                # file-{NNN}.mp4 = Episode NNN
    ├── observation.images.chest/
    │   └── chunk-000/
    │       └── file-{000-035}.mp4
    ├── observation.images.left_wrist/
    │   └── chunk-000/
    │       └── file-{000-035}.mp4
    ├── observation.images.right_wrist/
    │   └── chunk-000/
    │       └── file-{000-035}.mp4
    ├── observation.images.stereo_left/
    │   └── chunk-000/
    │       └── file-{000-035}.mp4
    └── observation.images.stereo_right/
        └── chunk-000/
            └── file-{000-035}.mp4
```

Each video file corresponds to one episode. Episode N maps to file-{N:03d}.mp4 across all camera streams.


Modalities

1. Video Streams (6 cameras)

All videos are H.264 encoded, 30 fps, with yuv420p pixel format.

| Stream | Resolution | Mounting Position | Notes |
|---|---|---|---|
| observation.images.egocentric | 1920x1080 | Head-mounted, first-person | Fisheye lens, wide-angle forward view |
| observation.images.chest | 1920x1080 | Chest-mounted, downward-angled | Captures hands and workspace |
| observation.images.left_wrist | 1920x1080 | Left wrist/forearm | Left hand and nearby objects |
| observation.images.right_wrist | 1920x1080 | Right wrist/forearm | Right hand and nearby objects |
| observation.images.stereo_left | 1280x720 | Head-mounted, downward-facing (left) | Stereo pair for depth |
| observation.images.stereo_right | 1280x720 | Head-mounted, downward-facing (right) | Stereo pair for depth |

Egocentric (Center RGB) Camera Intrinsics

The head-mounted egocentric camera uses a fisheye lens. Intrinsics at 1920x1080:

fx = 1093.98    fy = 1093.39
cx = 953.05     cy = 536.30
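
As a rough sanity check, these intrinsics can be assembled into a camera matrix and used with a plain pinhole projection. This is a sketch only: the fisheye distortion coefficients are not published here, so the snippet ignores lens distortion.

```python
import numpy as np

# Egocentric intrinsics from above (1920x1080).
K = np.array([[1093.98, 0.0, 953.05],
              [0.0, 1093.39, 536.30],
              [0.0, 0.0, 1.0]])

def project_point(K: np.ndarray, p_cam: np.ndarray) -> np.ndarray:
    """Pinhole projection of a 3D point (camera coordinates) to pixels.
    Ignores the fisheye distortion, whose coefficients are not provided."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# A point on the optical axis lands at the principal point (953.05, 536.30).
uv = project_point(K, np.array([0.0, 0.0, 1.0]))
```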

Stereo Camera Intrinsics and Depth Reconstruction

The stereo pair is head-mounted and downward-facing, capturing the workspace and hands from above. The pair can be used for depth estimation. Camera parameters at the delivered 1280x720 resolution:

Left stereo camera:

fx = 566.06    fy = 566.02
cx = 640.75    cy = 400.78
Distortion model: Rational polynomial (14 coefficients)

Right stereo camera:

fx = 566.54    fy = 566.69
cx = 644.35    cy = 403.60
Distortion model: Rational polynomial (14 coefficients)

Stereo geometry:

Baseline: 74.95 mm

The center RGB camera (egocentric/chest) sits approximately centered between the stereo pair — 37.4 mm to the right of the left camera and 37.6 mm to the left of the right camera. All cameras share a common rigid mount.

To compute a disparity map and recover depth:

```python
import cv2
import numpy as np

# Camera matrices
K_left = np.array([[566.06, 0, 640.75],
                   [0, 566.02, 400.78],
                   [0, 0, 1]])

K_right = np.array([[566.54, 0, 644.35],
                    [0, 566.69, 403.60],
                    [0, 0, 1]])

baseline_mm = 74.95

# After rectification, depth from disparity:
# depth_mm = (fx * baseline_mm) / disparity
# Using average fx ≈ 566.3:
# depth_mm = (566.3 * 74.95) / disparity
```
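
The depth formula in those comments can be wrapped as a small helper. This is a sketch: the disparity map itself would come from any stereo matcher (e.g. OpenCV's StereoSGBM) after rectifying with the intrinsics and 74.95 mm baseline above.

```python
import numpy as np

FX_AVG = (566.06 + 566.54) / 2.0   # average fx of the stereo pair
BASELINE_MM = 74.95                # stereo baseline from above

def disparity_to_depth_mm(disparity_px) -> np.ndarray:
    """Convert a disparity map (pixels) to depth (mm) for the rectified
    stereo pair. Non-positive disparities are mapped to NaN."""
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(d, np.nan)
    valid = d > 0
    depth[valid] = (FX_AVG * BASELINE_MM) / d[valid]
    return depth

# A disparity of about 42.4 px corresponds to roughly 1 m of depth.
```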

2. Tactile Sensors (256 taxels per hand)

Each hand is equipped with a full-coverage tactile glove containing 256 fiber-optic pressure sensors (taxels). The sensors use fiber-optic technology — light intensity through flexible optical fibers changes under mechanical pressure, providing responsive and high-dynamic-range force sensing across the entire hand surface. Values are unsigned 8-bit integers (0–255), stored as float32 in the dataset.

| Feature | Shape | Description |
|---|---|---|
| observation.tactile.left | (256,) | Left hand tactile pressure array |
| observation.tactile.right | (256,) | Right hand tactile pressure array |

Pressure value ranges:

| Contact Type | Typical Range |
|---|---|
| No contact | 0 |
| Light touch | 1–5 |
| Moderate grip | 10–35 |
| Hard press/grip | 40–105 |
| Sensor maximum | 255 |

Approximately 60 of the 256 taxels are active during a typical grip. Some taxels may read zero consistently due to sensor placement or contact geometry.
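
These typical ranges can be used to summarize a tactile frame. The thresholds below are illustrative values lifted from the table, not calibrated constants:

```python
import numpy as np

def summarize_contact(tactile_256) -> dict:
    """Summarize one tactile frame using the typical ranges above.
    Thresholds are illustrative (taken from the table), not calibrated."""
    t = np.asarray(tactile_256)
    return {
        "active_taxels": int((t > 0).sum()),
        "light_touch":   int(((t >= 1) & (t <= 5)).sum()),
        "moderate_grip": int(((t >= 10) & (t <= 35)).sum()),
        "hard_press":    int((t >= 40).sum()),
        "max_pressure":  float(t.max()),
    }
```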

Taxel Layout

The 256-byte array is formed by concatenating two 128-byte packets from the glove hardware. Indices 0–127 correspond to the first packet and indices 128–255 to the second.

The taxels map to the hand as follows:

Finger sensors (60 taxels per hand): Each finger has 4 phalanges with 3 taxels each (12 taxels per finger, 5 fingers).

Left hand finger mapping (256-byte array indices):

| Finger | Phalanx 1 (fingertip) | Phalanx 2 (mid-distal) | Phalanx 3 (mid-proximal) | Phalanx 4 (base) |
|---|---|---|---|---|
| Thumb | 30, 29, 28 | 14, 13, 12 | 254, 253, 252 | 238, 237, 236 |
| Index | 27, 26, 25 | 11, 10, 9 | 251, 250, 249 | 235, 234, 233 |
| Middle | 24, 23, 22 | 8, 7, 6 | 248, 247, 246 | 232, 231, 230 |
| Ring | 21, 20, 19 | 5, 4, 3 | 245, 244, 243 | 229, 228, 227 |
| Pinky | 18, 17, 16 | 2, 1, 0 | 242, 241, 240 | 226, 225, 224 |

Right hand finger mapping (256-byte array indices):

| Finger | Phalanx 1 (fingertip) | Phalanx 2 (mid-distal) | Phalanx 3 (mid-proximal) | Phalanx 4 (base) |
|---|---|---|---|---|
| Thumb | 239, 238, 237 | 255, 254, 253 | 15, 14, 13 | 31, 30, 29 |
| Index | 236, 235, 234 | 252, 251, 250 | 12, 11, 10 | 28, 27, 26 |
| Middle | 233, 232, 231 | 249, 248, 247 | 9, 8, 7 | 25, 24, 23 |
| Ring | 230, 229, 228 | 246, 245, 244 | 6, 5, 4 | 22, 21, 20 |
| Pinky | 227, 226, 225 | 243, 242, 241 | 3, 2, 1 | 19, 18, 17 |

Bridge sensors (5 per hand): One sensor between each finger and the palm.

| Hand | Thumb | Index | Middle | Ring | Pinky |
|---|---|---|---|---|---|
| Left hand index | 221 | 218 | 215 | 212 | 209 |
| Right hand index | 46 | 43 | 40 | 37 | 34 |

Palm grid (72 taxels per hand): Arranged in 5 rows from fingers to heel of palm.

Left hand palm (all from indices 128–255):

| Row | Count | Index Range |
|---|---|---|
| Row 1 (near fingers) | 12 | 206 → 195 |
| Row 2 | 15 | 190 → 176 |
| Row 3 | 15 | 174 → 160 |
| Row 4 | 15 | 158 → 144 |
| Row 5 (heel of palm) | 15 | 142 → 128 |

Right hand palm (all from indices 0–127):

| Row | Count | Index Range |
|---|---|---|
| Row 1 (near fingers) | 12 | 60 → 49 |
| Row 2 | 15 | 79 → 65 |
| Row 3 | 15 | 95 → 81 |
| Row 4 | 15 | 111 → 97 |
| Row 5 (heel of palm) | 15 | 127 → 113 |

Extracting Specific Regions

```python
import numpy as np

def extract_finger_taxels(tactile_256: np.ndarray, hand: str = "left") -> dict:
    """Extract per-finger taxels from the raw 256-element array.

    Args:
        tactile_256: Shape (256,) array of pressure values.
        hand: "left" or "right".

    Returns:
        Dict mapping finger name to (12,) array [4 phalanges x 3 taxels].
    """
    if hand == "left":
        mapping = {
            "thumb":  [30,29,28, 14,13,12, 254,253,252, 238,237,236],
            "index":  [27,26,25, 11,10,9,  251,250,249, 235,234,233],
            "middle": [24,23,22, 8,7,6,    248,247,246, 232,231,230],
            "ring":   [21,20,19, 5,4,3,    245,244,243, 229,228,227],
            "pinky":  [18,17,16, 2,1,0,    242,241,240, 226,225,224],
        }
    else:
        mapping = {
            "thumb":  [239,238,237, 255,254,253, 15,14,13,  31,30,29],
            "index":  [236,235,234, 252,251,250, 12,11,10,  28,27,26],
            "middle": [233,232,231, 249,248,247, 9,8,7,     25,24,23],
            "ring":   [230,229,228, 246,245,244, 6,5,4,     22,21,20],
            "pinky":  [227,226,225, 243,242,241, 3,2,1,     19,18,17],
        }
    return {name: tactile_256[idx] for name, idx in mapping.items()}
```
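
The palm grids can be extracted the same way. The tables above only give descending index ranges per row, so the left-to-right orientation within each row is an assumption here:

```python
import numpy as np

# Palm row index ranges from the tables above (start → end, inclusive,
# descending). Row 1 is nearest the fingers, row 5 at the heel of the palm.
PALM_ROWS = {
    "left":  [(206, 195), (190, 176), (174, 160), (158, 144), (142, 128)],
    "right": [(60, 49),   (79, 65),   (95, 81),   (111, 97),  (127, 113)],
}

def extract_palm_rows(tactile_256: np.ndarray, hand: str = "left") -> list:
    """Return the 5 palm rows as 1D arrays of lengths (12, 15, 15, 15, 15)."""
    return [tactile_256[np.arange(start, end - 1, -1)]
            for start, end in PALM_ROWS[hand]]
```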

3. Body IMUs (8 streams)

IMU sensors are distributed across the upper body, providing acceleration and angular velocity data.

| Feature | Shape | Channels | Placement |
|---|---|---|---|
| observation.imu.head | (9,) | accel(3) + gyro(3) + mag(3) | On the camera, ~2 inches in front of the forehead |
| observation.imu.chest | (6,) | accel(3) + gyro(3) | Center of the sternum |
| observation.imu.left_bicep | (6,) | accel(3) + gyro(3) | Outer surface of the left upper arm, midway between shoulder and elbow |
| observation.imu.right_bicep | (6,) | accel(3) + gyro(3) | Outer surface of the right upper arm, midway between shoulder and elbow |
| observation.imu.left_forearm | (6,) | accel(3) + gyro(3) | Outer surface of the left forearm, midway between elbow and wrist |
| observation.imu.right_forearm | (6,) | accel(3) + gyro(3) | Outer surface of the right forearm, midway between elbow and wrist |
| observation.imu.left_hand | (4,) | quaternion(4) | Back of the left hand (from glove) |
| observation.imu.right_hand | (4,) | quaternion(4) | Back of the right hand (from glove) |

For the 6-axis IMUs, the channel layout is [accel_x, accel_y, accel_z, gyro_x, gyro_y, gyro_z]. The head IMU includes 3 additional magnetometer channels. The hand IMUs provide orientation quaternions [qx, qy, qz, qw].
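
A small helper can split a raw sample into named channel groups based on these layouts:

```python
import numpy as np

def split_imu(sample) -> dict:
    """Split one IMU sample into named channel groups by length:
    4 = quaternion [qx, qy, qz, qw], 6 = accel + gyro, 9 = accel + gyro + mag."""
    sample = np.asarray(sample)
    if sample.shape[0] == 4:
        return {"quat": sample}
    out = {"accel": sample[0:3], "gyro": sample[3:6]}
    if sample.shape[0] == 9:
        out["mag"] = sample[6:9]
    return out
```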

All IMU data has been resampled to 30 fps to align with video frames using sample-and-hold interpolation from the original variable-rate sensor streams.

4. Temporal Alignment

All sensor streams are synchronized to the video frame clock at 30 fps. Cross-modal alignment error is less than 33 ms (less than 1 frame). Variable-rate sensors (tactile gloves, BLE IMUs) are resampled to 30 fps using sample-and-hold: each video frame carries the most recent sensor reading available at that timestamp.
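
For reference, sample-and-hold resampling of a raw variable-rate stream onto a frame clock can be sketched as follows (a hypothetical helper; the shipped Parquet data is already resampled):

```python
import numpy as np

def sample_and_hold(sensor_t: np.ndarray, sensor_vals: np.ndarray,
                    frame_t: np.ndarray) -> np.ndarray:
    """Resample a variable-rate sensor stream onto a frame clock: each frame
    gets the most recent sensor reading at or before its timestamp."""
    idx = np.searchsorted(sensor_t, frame_t, side="right") - 1
    # Frames before the first reading simply hold that first reading.
    idx = np.clip(idx, 0, len(sensor_t) - 1)
    return sensor_vals[idx]
```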

5. Hand Motion Capture from Tactile Data

The tactile data can be used to derive per-finger bend (curl) values, providing a form of hand motion capture without an optical tracking system. The method sums the pressure over six taxels spanning the first three phalanges of each finger to estimate how curled it is:

```python
import numpy as np

def compute_finger_bend(tactile_256: np.ndarray, hand: str = "left") -> np.ndarray:
    """Compute a bend value (0 = open, higher = curled) for each finger.

    Sums 6 taxels across the first 3 phalanges of each finger.
    Returns shape (5,) array: [thumb, index, middle, ring, pinky].
    """
    if hand == "left":
        finger_indices = {
            "thumb":  [30, 29, 14, 13, 254, 253],
            "index":  [27, 26, 11, 10, 251, 250],
            "middle": [24, 23,  8,  7, 248, 247],
            "ring":   [21, 20,  5,  4, 245, 244],
            "pinky":  [18, 17,  2,  1, 242, 241],
        }
    else:
        finger_indices = {
            "thumb":  [239, 238, 255, 254, 15, 14],
            "index":  [236, 235, 252, 251, 12, 11],
            "middle": [233, 232, 249, 248,  9,  8],
            "ring":   [230, 229, 246, 245,  6,  5],
            "pinky":  [227, 226, 243, 242,  3,  2],
        }
    return np.array([tactile_256[idx].sum() for idx in finger_indices.values()])


def normalize_finger_bend(
    bend_raw: np.ndarray,
    open_hand: np.ndarray,
    closed_fist: np.ndarray,
) -> np.ndarray:
    """Normalize raw bend values to 0.0 (open) – 1.0 (fully curled).

    Requires calibration frames: one with hand open, one with a closed fist.
    """
    range_ = closed_fist - open_hand
    range_[range_ == 0] = 1  # avoid division by zero
    normalized = (bend_raw - open_hand) / range_
    return np.clip(normalized, 0.0, 1.0)
```

To calibrate, capture one frame with the hand fully open and one with a closed fist. The normalized value for each finger then maps directly to joint angle: 0.0 corresponds to fully extended and 1.0 to fully curled. All joints in a finger chain (MCP, PIP, DIP) can use the same normalized value, applying a rotation of -pi/2 * normalized per joint for a simple kinematic model.
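
That simple kinematic model can be written out directly, a sketch of the -pi/2-per-joint convention described above:

```python
import numpy as np

def finger_joint_angles(bend_normalized: np.ndarray) -> np.ndarray:
    """Map normalized bend (0.0 open, 1.0 fully curled) to joint rotations
    for the simple kinematic model: the same -pi/2 * bend rotation applied
    at each of the MCP, PIP, and DIP joints. Returns shape (5, 3), radians."""
    per_joint = -np.pi / 2 * np.asarray(bend_normalized)
    return np.tile(per_joint[:, None], (1, 3))
```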

Combined with the hand IMU quaternion (for wrist orientation) and the per-finger bend values (for finger curl), this provides 6-DOF wrist pose plus 5-DOF finger articulation per hand.


Loading the Dataset

Prerequisites

```bash
pip install huggingface_hub pandas pyarrow
```

Download

```bash
huggingface-cli login --token YOUR_TOKEN
huggingface-cli download humanarchive/HA-Multi-Samples \
    --repo-type dataset \
    --local-dir ~/HA-Multi-Samples
```

Loading in Python

```python
import pandas as pd
import json
from pathlib import Path

DATASET_DIR = Path("~/HA-Multi-Samples").expanduser()

# Load metadata
with open(DATASET_DIR / "meta" / "info.json") as f:
    info = json.load(f)

with open(DATASET_DIR / "meta" / "stats.json") as f:
    stats = json.load(f)

tasks = pd.read_parquet(DATASET_DIR / "meta" / "tasks.parquet")
episodes = pd.read_parquet(DATASET_DIR / "meta" / "episodes" / "chunk-000" / "file-000.parquet")

# Load all sensor data
data = pd.read_parquet(DATASET_DIR / "data" / "chunk-000" / "file-000.parquet")

print(f"Frames: {len(data):,}")
print(f"Episodes: {len(episodes)}")
print(f"Tasks: {list(tasks['task'])}")
print(f"Columns: {list(data.columns)}")
```

Accessing a Single Episode

Sensor columns are stored as nested arrays (each cell contains a numpy array), not as flattened individual columns.

```python
import numpy as np

episode_id = 0
ep = data[data["episode_index"] == episode_id].reset_index(drop=True)

# Timestamp in seconds
timestamps = ep["timestamp"].values

# Tactile data — shape (N, 256)
left_tactile = np.stack(ep["observation.tactile.left"].values)     # (N, 256)
right_tactile = np.stack(ep["observation.tactile.right"].values)   # (N, 256)

# Hand IMU — shape (N, 12)
left_hand_imu = np.stack(ep["observation.tactile.left_glove_imu"].values)    # (N, 12)
right_hand_imu = np.stack(ep["observation.tactile.right_glove_imu"].values)  # (N, 12)

# Body IMUs
chest_imu = np.stack(ep["observation.imu.chest"].values)           # (N, 6)
head_imu = np.stack(ep["observation.imu.head"].values)             # (N, 9)
left_bicep = np.stack(ep["observation.imu.left_bicep"].values)     # (N, 6)
right_bicep = np.stack(ep["observation.imu.right_bicep"].values)   # (N, 6)

# Video path for this episode
video_path = DATASET_DIR / "videos" / "observation.images.egocentric" / "chunk-000" / f"file-{episode_id:03d}.mp4"
```

Playing Videos

```bash
# Play a single camera view for episode 0 (macOS; use xdg-open on Linux)
open ~/HA-Multi-Samples/videos/observation.images.egocentric/chunk-000/file-000.mp4

# Play with ffplay (if ffmpeg is installed)
ffplay ~/HA-Multi-Samples/videos/observation.images.chest/chunk-000/file-005.mp4
```
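
Programmatically, a helper like the following (hypothetical, mirroring the file-{NNN}.mp4 convention described above) builds the path for any camera stream and episode:

```python
from pathlib import Path

DATASET_DIR = Path("~/HA-Multi-Samples").expanduser()
CAMERAS = ["egocentric", "chest", "left_wrist", "right_wrist",
           "stereo_left", "stereo_right"]

def video_path(camera: str, episode: int) -> Path:
    """Path to one camera stream for one episode (file-{NNN}.mp4 convention)."""
    return (DATASET_DIR / "videos" / f"observation.images.{camera}"
            / "chunk-000" / f"file-{episode:03d}.mp4")
```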

Per-Episode Reference

| Episode | Task | Environment | Frames | Duration |
|---|---|---|---|---|
| 0 | Cooking | Kitchen | 26,469 | 14.7 min |
| 1 | Cleaning | Living room | 17,792 | 9.9 min |
| 2 | Cleaning | Living room | 38,395 | 21.3 min |
| 3 | Folding and cleaning | Bedroom | 36,747 | 20.4 min |
| 4 | Placing shoes | Hallway | 1,855 | 1.0 min |
| 5 | Cleaning | Bathroom | 9,147 | 5.1 min |
| 6 | Cleaning | Office | 6,842 | 3.8 min |
| 7 | Cleaning | Bedroom | 17,165 | 9.5 min |
| 8 | Folding and cleaning | Bedroom | 9,061 | 5.0 min |
| 9 | Cleaning | Bathroom | 10,974 | 6.1 min |
| 10 | Cleaning | Bedroom | 9,378 | 5.2 min |
| 11 | Folding clothes | Bedroom | 19,023 | 10.6 min |
| 12 | Cleaning | Kitchen | 17,684 | 9.8 min |
| 13 | Cleaning | Living room | 1,925 | 1.1 min |
| 14 | Cleaning | Bedroom | 15,399 | 8.6 min |
| 15 | Folding clothes | Bedroom | 902 | 0.5 min |
| 16 | Folding clothes | Bedroom | 1,193 | 0.7 min |
| 17 | Folding clothes | Bedroom | 3,001 | 1.7 min |
| 18 | Folding clothes | Bedroom | 675 | 0.4 min |
| 19 | Folding clothes | Bedroom | 706 | 0.4 min |
| 20 | Cleaning | Bedroom | 670 | 0.4 min |
| 21 | Cleaning | Bathroom | 12,838 | 7.1 min |
| 22 | Cleaning | Hallway | 2,603 | 1.4 min |
| 23 | Cooking | Kitchen | 36,822 | 20.5 min |
| 24 | Cooking | Kitchen | 3,772 | 2.1 min |
| 25 | Cooking | Kitchen | 10,386 | 5.8 min |
| 26 | Cleaning | Bedroom | 5,622 | 3.1 min |
| 27 | Cleaning | Bedroom | 10,024 | 5.6 min |
| 28 | Cleaning | Bedroom | 1,656 | 0.9 min |
| 29 | Cleaning | Kitchen | 8,522 | 4.7 min |
| 30 | Cleaning | Kitchen | 6,173 | 3.4 min |
| 31 | Cleaning | Bedroom | 21,041 | 11.7 min |
| 32 | Ironing | Bedroom | 1,909 | 1.1 min |
| 33 | Ironing | Bedroom | 27,703 | 15.4 min |
| 34 | Ironing | Bedroom | 14,642 | 8.1 min |
| 35 | Ironing | Bedroom | 11,914 | 6.6 min |

Feature Reference

Complete list of columns in data/chunk-000/file-000.parquet. Scalar columns store one value per row. Array columns store a numpy array per row (access with np.stack(df["column"].values) to get a 2D matrix).

| Column | Type | Shape | Description |
|---|---|---|---|
| index | int64 | scalar | Global frame index across entire dataset |
| episode_index | int64 | scalar | Episode number (0–35) |
| frame_index | int64 | scalar | Frame number within the episode |
| timestamp | float32 | scalar | Time in seconds from episode start |
| task_index | int64 | scalar | Task ID (see tasks.parquet) |
| observation.tactile.left | float32[] | (256,) | Left hand tactile pressure |
| observation.tactile.right | float32[] | (256,) | Right hand tactile pressure |
| observation.tactile.left_glove_imu | float32[] | (12,) | Left hand IMU (quaternion + accel/gyro) |
| observation.tactile.right_glove_imu | float32[] | (12,) | Right hand IMU (quaternion + accel/gyro) |
| observation.imu.head | float32[] | (9,) | Head IMU (accel + gyro + mag) |
| observation.imu.chest | float32[] | (6,) | Chest IMU (accel + gyro) |
| observation.imu.left_bicep | float32[] | (6,) | Left upper arm IMU |
| observation.imu.right_bicep | float32[] | (6,) | Right upper arm IMU |
| observation.imu.left_forearm | float32[] | (6,) | Left forearm IMU |
| observation.imu.right_forearm | float32[] | (6,) | Right forearm IMU |
| observation.imu.left_hand | float32[] | (4,) | Left hand quaternion |
| observation.imu.right_hand | float32[] | (4,) | Right hand quaternion |

Format

This dataset uses the LeRobot v3.0 chunked format. Key conventions:

- Video files are stored separately from sensor data, referenced by episode index
- Sensor data is stored in Parquet files with one row per frame
- All modalities are time-aligned at 30 fps
- Episodes are independent recording segments; frame_index resets to 0 at the start of each episode
- meta/stats.json contains per-feature min, max, mean, and standard deviation computed across the full dataset
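
For example, features can be z-scored for training using those statistics. The exact JSON layout of stats.json is an assumption here (per-feature mean/std vectors, matching the per-feature statistics described above):

```python
import numpy as np

def standardize(values, feature_stats: dict) -> np.ndarray:
    """Z-score a feature using per-feature stats from meta/stats.json
    (assumed layout: {"mean": [...], "std": [...]} under each feature key).
    Zero std channels are left unscaled to avoid division by zero."""
    values = np.asarray(values)
    mean = np.asarray(feature_stats["mean"])
    std = np.asarray(feature_stats["std"])
    return (values - mean) / np.where(std == 0, 1, std)
```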