
EgoDorsal

Data repository for the EgoDorsal dataset described in *DeltaDorsal: Enhancing Hand Pose Estimation with Dorsal Features in Egocentric Views*, published at ACM CHI 2026.

[Github] [arXiv]

Overview

This is a small sample of the EgoDorsal dataset described in DeltaDorsal. More data may be uploaded over time.

Data Preparation

We organize our data according to the following layout, which interfaces with our prewritten dataset modules in src/datasets/. If you want to use your own data, feel free to write your own modules.

.                                         # ROOT
β”œβ”€β”€ bases.json                            # Bases metadata
β”œβ”€β”€ trials.json                           # Metadata of each trial
β”œβ”€β”€ frames.json                           # Metadata of all captured frames
β”œβ”€β”€ train.json                            # (OPTIONAL) subset of frames.json for train split
β”œβ”€β”€ val.json                              # (OPTIONAL) subset of frames.json for val split
β”œβ”€β”€ test.json                             # (OPTIONAL) subset of frames.json for test
β”œβ”€β”€ trials/                               # all captured data
β”‚   β”œβ”€β”€ PARTICIPANT_XXX/
β”‚   β”‚   β”œβ”€β”€ TRIAL_XXX/
β”‚   β”‚   β”‚   β”œβ”€β”€ anns/
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ frame_XXX.npy
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ frame_XXX.npy
β”‚   β”‚   β”‚   β”‚   └── frame_XXX.npy
β”‚   β”‚   β”‚   β”œβ”€β”€ hamer/                    # initial pose prediction
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ frame_XXX.npy
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ frame_XXX.npy
β”‚   β”‚   β”‚   β”‚   └── frame_XXX.npy
β”‚   β”‚   β”‚   β”œβ”€β”€ imgs/                     # captured images
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ frame_XXX.jpg
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ frame_XXX.jpg
β”‚   β”‚   β”‚   β”‚   └── frame_XXX.jpg
β”‚   β”‚   β”‚   β”œβ”€β”€ cropped_images/           # (OPTIONAL) Precropped images that are aligned to bases
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ frame_XXX.jpg
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ frame_XXX.jpg
β”‚   β”‚   β”‚   β”‚   └── frame_XXX.jpg
β”‚   β”‚   β”‚   └── cropped_bases/            # (OPTIONAL) Precropped bases that are aligned to each frame
β”‚   β”‚   β”‚       β”œβ”€β”€ frame_XXX.jpg
β”‚   β”‚   β”‚       β”œβ”€β”€ frame_XXX.jpg
β”‚   β”‚   β”‚       └── frame_XXX.jpg
β”‚   β”‚   └── ...
β”‚   └── ...
└── bases/                               # all captured reference images
    β”œβ”€β”€ PARTICIPANT_XXX/
    β”‚   β”œβ”€β”€ hamer/                       # initial pose prediction
    β”‚   β”‚   β”œβ”€β”€ frame_XXX.npy
    β”‚   β”‚   β”œβ”€β”€ frame_XXX.npy
    β”‚   β”‚   └── frame_XXX.npy
    β”‚   └── imgs/
    β”‚       β”œβ”€β”€ frame_XXX.jpg
    β”‚       β”œβ”€β”€ frame_XXX.jpg
    β”‚       └── frame_XXX.jpg
    └── ...

Each frame should have:

  • An image file
  • HaMeR pose predictions, or some other initial 2D pose prediction for alignment
  • Ground-truth pose annotations
  • Force sensor readings for force estimation (optional)
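
A quick integrity check over this layout can catch frames that are missing one of the required files. The sketch below (standard library only) walks trials/ and reports images without a matching HaMeR prediction or ground-truth annotation; directory names follow the tree above, so adjust them if your layout differs:

```python
import os

def missing_files(root: str) -> list[str]:
    """Walk trials/ and report the .npy files (anns/ or hamer/)
    missing for any captured image under imgs/."""
    problems = []
    trials_dir = os.path.join(root, "trials")
    for participant in sorted(os.listdir(trials_dir)):
        p_dir = os.path.join(trials_dir, participant)
        for trial in sorted(os.listdir(p_dir)):
            t_dir = os.path.join(p_dir, trial)
            for img in sorted(os.listdir(os.path.join(t_dir, "imgs"))):
                stem = os.path.splitext(img)[0]
                for sub in ("anns", "hamer"):
                    npy = os.path.join(t_dir, sub, stem + ".npy")
                    if not os.path.exists(npy):
                        problems.append(npy)
    return problems
```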

Data Schemas

bases.json

  • bases_dir (str) - path to the bases dir (defaults to "bases")
  • participants (array)
    • item (object)
      • p_id (int) - participant id
      • bases (array)
        • item (object)
          • base_id (int) - base id
          • img_path (str) - relative path from bases_dir to the base image .jpg
          • hamer_path (str) - relative path from bases_dir to the initial annotations .npy
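
Resolving base image paths from this schema takes only a few lines. A minimal sketch (field names as documented above; the (p_id, base_id) keying is our choice of lookup, not part of the schema):

```python
import json
import os

def base_paths(meta_path: str) -> dict[tuple[int, int], str]:
    """Map (p_id, base_id) -> path of the base image, rooted at bases_dir."""
    with open(meta_path) as f:
        meta = json.load(f)
    bases_dir = meta.get("bases_dir", "bases")
    out = {}
    for p in meta["participants"]:
        for b in p["bases"]:
            out[(p["p_id"], b["base_id"])] = os.path.join(bases_dir, b["img_path"])
    return out
```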

trials.json

  • trials (array)
    • item (object)
      • trial_id (int) - id of this trial
      • p_id (int) - participant id for this trial
      • motion_type (str) - type of gesture
      • hand_position (str) - orientation of hand
      • K (array(float)) - 3x3 camera intrinsic matrix (also commonly denoted A)
      • d (array(float)) - 1x15 camera distortion coefficients (as output by OpenCV)
      • world2cam (array(float)) - 4x4 camera extrinsic matrix
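
As a sanity check on these calibration fields, a world-space point can be projected to pixels with world2cam and K. A minimal plain-Python sketch (distortion d is ignored here for simplicity):

```python
def project(K, world2cam, xyz):
    """Project a 3D world point to pixel coordinates (no distortion).
    K: 3x3 intrinsics, world2cam: 4x4 extrinsics, xyz: [x, y, z]."""
    # Homogeneous world point -> camera frame (first 3 rows of world2cam).
    xh = list(xyz) + [1.0]
    cam = [sum(world2cam[i][j] * xh[j] for j in range(4)) for i in range(3)]
    # Pinhole projection through K, then perspective divide.
    uvw = [sum(K[i][j] * cam[j] for j in range(3)) for i in range(3)]
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```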

frames.json, train.json, val.json, test.json

  • frames_dir (str) - path to the frames dir (defaults to "trials")
  • split (str) - training split; one of full, train, val, test
  • frames (array)
    • item (object)
      • trial_id (int) - id of corresponding trial
      • timestamp (float) - timestamp within the video capture
      • frame_no (int) - index of frame in the video
      • img_path (str) - relative path from frames_dir to the captured image .jpg
      • cropped_img_path (str) - (OPTIONAL) relative path from frames_dir to a precropped and aligned image .jpg
      • cropped_base_path (str) - (OPTIONAL) relative path from frames_dir to a precropped and aligned reference image .jpg
      • cropped_base_idx (int) - index of base image taken and prealigned for this frame
      • ann_path (str) - relative path from frames_dir to the ground truth annotation .npy
      • hamer_path (str) - relative path from frames_dir to the initial annotations .npy
      • fsr_reading (float) - (OPTIONAL) FSR (force-sensitive resistor) reading for this frame, for force prediction
      • tap_label (float) - (OPTIONAL) assigned label for force action type
      • frame_id (int) - id for this frame data
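
Since each frame record carries a trial_id, joining frames to their trial metadata (camera calibration, motion type, etc.) is a simple dictionary lookup. A sketch (field names as documented above):

```python
import json

def load_frames(frames_json: str, trials_json: str):
    """Return (frame, trial) pairs, joined on trial_id."""
    with open(frames_json) as f:
        frames = json.load(f)["frames"]
    with open(trials_json) as f:
        trials = {t["trial_id"]: t for t in json.load(f)["trials"]}
    return [(fr, trials[fr["trial_id"]]) for fr in frames]
```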

annotation.npy (all annotation files)

  • betas (array) - 1x10 shape parameters for MANO
  • global_orient (array) - 1x3 global orientation for MANO (commonly represented as the first three terms of pose parameters)
  • hand_pose (array) - 1x15x3 pose parameters in axis-angle representation for MANO. (Can be 1x15x3x3 if in rotation matrix form)
  • cam_t (array) - 1x3 camera projection parameters to convert from 3D to 2D camera frame annotations
  • keypoints_3d (array) - 21x3 OpenPose keypoint locations in xyz format
  • keypoints_2d (array) - 21x2 OpenPose keypoint locations projected into the camera frame
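
A sketch of loading one of these files and checking the documented shapes, assuming NumPy and that each .npy stores a pickled dict at the top level (the usual convention for dict-valued .npy files; verify against the actual files):

```python
import numpy as np

# Field -> expected shape, per the schema above.
EXPECTED = {
    "betas": (1, 10),
    "global_orient": (1, 3),
    "cam_t": (1, 3),
    "keypoints_3d": (21, 3),
    "keypoints_2d": (21, 2),
}

def load_annotation(path: str) -> dict:
    """Load an annotation .npy (a pickled dict) and verify shapes.
    hand_pose may be axis-angle (1, 15, 3) or rotation matrices (1, 15, 3, 3)."""
    ann = np.load(path, allow_pickle=True).item()
    for key, shape in EXPECTED.items():
        assert ann[key].shape == shape, (key, ann[key].shape)
    assert ann["hand_pose"].shape in [(1, 15, 3), (1, 15, 3, 3)]
    return ann
```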

Citing

If extending or using our work, please cite the following paper:

@misc{huangDeltaDorsalEnhancingHand2026,
  title = {{{DeltaDorsal}}: {{Enhancing Hand Pose Estimation}} with {{Dorsal Features}} in {{Egocentric Views}}},
  shorttitle = {{{DeltaDorsal}}},
  author = {Huang, William and Pei, Siyou and Zou, Leyi and Gonzalez, Eric J. and Chatterjee, Ishan and Zhang, Yang},
  year = 2026,
  month = jan,
  number = {arXiv:2601.15516},
  eprint = {2601.15516},
  primaryclass = {cs},
  publisher = {arXiv},
  doi = {10.48550/arXiv.2601.15516},
  urldate = {2026-02-08},
  archiveprefix = {arXiv}
}