
# Dataset for ASA

This repo provides the data used in our paper *Act, Sense, Act: Learning Non-Markovian Active Perception Strategies from Large-Scale Egocentric Human Data*. It consists of a curated combination of public egocentric human datasets and collected robot data, processed into a unified format for training.

For more details, please refer to the paper and project page.

## Dataset Overview

### Human Data

| Source | Type | Samples | Takes | Languages | Take_Languages |
|---|---|---|---|---|---|
| CaptainCook4D | frame3_chunk1-100_his10-15_anno_image | 1,071,604 | 257 | 351 | 3417 |
| EgoExo4D | proprio_frame1_chunk3-100_his30-15_image | 421,582 | 249 | 2730 | 3131 |

### Robot Data

| Source | Task | Type | Samples | Takes | Languages | Take_Languages | Details |
|---|---|---|---|---|---|---|---|
| Monte02 | task1_1 | frame1_chunk3-100_his30-15_extend90_gripper_image | 493,678 | 191 | 3 | 573 | vision/current_image 224, vision/history_image 224, hf.feature |
| Monte02 | task3_1 | frame1_chunk3-100_his30-15_extend90_gripper_image_new | 383,729 | 102 | 3 | 306 | new (no light), new_anno |
| Monte02 | task3_2 | frame1_chunk3-100_his30-15_extend90_gripper_image_newnew_aug | 300,958 | 83 | 3 | 249 | new sub23, new sub1 + new-only-sub1 (180k), and image augmentation |
| Monte02 | task1_2 | frame1_chunk3-100_his30-15_extend90_gripper_image_move | 375,143 | 188 | 2 | 376 | only subtasks 2 and 3; source = 'Monte02_Move' |
| Monte02 | task1_2 | frame1_chunk3-100_his30-15_extend90_gripper_hand_new | 275,699 | 218 | 2 | 218 | sub1 old + sub4 new; source = 'Monte02', 'Monte02_12sub4' |
| Monte02 | task2_1 | proprio_frame1_chunk3-100_his30-15_extend90_gripper_newdata_image_new | 151,628 | 69 | 2 | 138 | new data, big ring |

## Dataset Structure

### Directory Layout

```
ASA/
├── captaincook4d
│   └── hf_datasets
│       └── proprio_frame3_chunk1-100_his30-15_anno_image
│           ├── by_language.pkl
│           ├── by_take_language.pkl
│           ├── by_take.pkl
│           ├── data-00000-of-00028.arrow
│           ├── ...
│           ├── data-00027-of-00028.arrow
│           ├── dataset_info.json
│           └── state.json
├── egoexo4d
│   └── hf_datasets
│       └── proprio_frame1_chunk3-100_his30-15_image
│           ├── xxx.pkl
│           ├── xxxxx.arrow
│           └── xxx.json
└── monte02
    ├── hf_datasets
    │   ├── task1_1
    │   │   └── proprio_xxx
    │   │       ├── xxx.pkl
    │   │       ├── xxxxx.arrow
    │   │       └── xxx.json
    │   ├── task1_2
    │   ├── task2_1
    │   ├── task3_1
    │   └── task3_2
    └── raw_data
        ├── task1_1.zip
        │   └── folder
        │       └── sample_xxxx_xxx
        │           ├── annotation.json
        │           ├── head_video.avi
        │           ├── robot_data.h5
        │           ├── label_result.txt (optional, not available for all samples)
        │           ├── left_video.avi (optional)
        │           ├── right_video.avi (optional)
        │           └── valid.txt
        ├── task1_2.zip
        ├── task2_1.zip
        ├── task3_1.zip
        └── task3_2.zip
```

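The `data-*.arrow` shards together with `dataset_info.json` and `state.json` follow the standard 🤗 Datasets on-disk layout, so each split should load with `datasets.load_from_disk`. The `by_take.pkl` / `by_language.pkl` / `by_take_language.pkl` files are plain pickles; their exact schema is not documented here, so the dict-of-row-indices layout below is a hypothetical stand-in — inspect the real files before relying on it:

```python
import os
import pickle
import tempfile

# Loading one split (requires the `datasets` library):
#   from datasets import load_from_disk
#   ds = load_from_disk(
#       "ASA/captaincook4d/hf_datasets/proprio_frame3_chunk1-100_his30-15_anno_image")

# Hypothetical index schema: a dict mapping a key (take name or language
# string) to the matching row indices in the Arrow dataset.
fake_index = {"take_A": [0, 1, 2], "take_B": [3, 4]}

path = os.path.join(tempfile.mkdtemp(), "by_take.pkl")
with open(path, "wb") as f:
    pickle.dump(fake_index, f)

with open(path, "rb") as f:
    by_take = pickle.load(f)

rows_for_take = by_take["take_A"]  # row indices, e.g. to pass to ds.select(...)
```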
### Data Fields

#### CaptainCook4D

| Key | Type | Shape | Details |
|---|---|---|---|
| source | str | - | which dataset the sample comes from |
| take_name | str | - | |
| frame_idx | int | - | index of the frame in the filtered take (not continuous; aligned with the pose index) |
| vision/rgb_image | bytes | - | RGB image of size (504, 896, 3) |
| vision/current_image | Image (hf.feature) | - | head RGB image of size (224, 224, 3) |
| vision/history_image | list(Image) (hf.feature) | - | 5 history head RGB images (5 s, t-5 ~ t-1) of size (224, 224, 3) |
| vision/video_frame | int | - | index of the frame in the video |
| vision/histroy_idx | list | - | indices of the history frames in the HF image dataset (may fall in a past subtask) |
| current/complete | bool | - | whether the subtask is complete |
| annotation/language | str | - | |
| annotation/start_frame | int | - | start frame of this keystep |
| annotation/end_frame | int | - | |
| annotation/delta_idx | int | - | index change in the filtered keystep |
| current/head/raw_pose | ndarray | (4, 4) | in the world frame |
| current/left_hand/raw_pose | ndarray | (26, 4, 4) | 26 joints of the left hand |
| current/left_hand/mano_params | ndarray | (15,) | not used |
| current/right_hand/raw_pose | ndarray | (26, 4, 4) | |
| current/right_hand/mano_params | ndarray | (15,) | |
| current/head/pose_in_base | ndarray | (9,) | in the base frame |
| current/left_hand/pose_in_base | ndarray | (26, 9) | all 26 joints |
| current/left_hand/wrist_in_base | ndarray | (9,) | wrist only |
| current/left_hand/gripper | ndarray | (1,) | |
| current/right_hand/pose_in_base | ndarray | (26, 9) | all 26 joints |
| current/right_hand/wrist_in_base | ndarray | (9,) | |
| current/right_hand/gripper | ndarray | (1,) | normalized gripper state |
| current/head/move | bool | - | whether the component is moving in the current subtask |
| current/left_hand/move | bool | - | |
| current/right_hand/move | bool | - | |
| future/complete | ndarray | (100,) | future chunk of 100 steps |
| future/head/move | ndarray | (100,) | |
| future/head/pose_in_base | ndarray | (100, 9) | |
| future/left_hand/move | ndarray | (100,) | |
| future/left_hand/wrist_in_base | ndarray | (100, 9) | |
| future/left_hand/gripper | ndarray | (100, 1) | |
| future/right_hand/move | ndarray | (100,) | |
| future/right_hand/wrist_in_base | ndarray | (100, 9) | |
| future/right_hand/gripper | ndarray | (100, 1) | |
| history/complete | list | - | history chunk of 15 steps, only within this subtask |
| history/head/move | list | - | |
| history/head/pose_in_base | list | - | |
| history/left_hand/move | list | - | |
| history/left_hand/wrist_in_base | list | - | |
| history/left_hand/gripper | list | - | |
| history/right_hand/move | list | - | |
| history/right_hand/wrist_in_base | list | - | |
| history/right_hand/gripper | list | - | |
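The table pairs raw 4×4 homogeneous poses (`current/head/raw_pose`) with 9-dim `pose_in_base` vectors, which the Monte02 table describes as xyz + rot6d. A minimal packing sketch, assuming the common 6D convention of concatenating the first two rotation-matrix columns (the dataset's exact ordering is not documented here):

```python
import numpy as np

def pose44_to_pose9(T: np.ndarray) -> np.ndarray:
    """Pack a 4x4 homogeneous pose into xyz + 6D rotation.

    Assumes the first-two-columns 6D convention; verify the ordering
    against the real arrays before use.
    """
    t = T[:3, 3]                    # translation xyz
    r6 = T[:3, :2].T.reshape(-1)    # column 0, then column 1
    return np.concatenate([t, r6])

# Example: 90-degree rotation about z, translated to (1, 2, 3)
T = np.array([
    [0.0, -1.0, 0.0, 1.0],
    [1.0,  0.0, 0.0, 2.0],
    [0.0,  0.0, 1.0, 3.0],
    [0.0,  0.0, 0.0, 1.0],
])
pose9 = pose44_to_pose9(T)
```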
#### EgoExo4D

| Key | Type | Shape | Details |
|---|---|---|---|
| source | str | - | which dataset the sample comes from |
| take_name | str | - | |
| frame_idx | int | - | index of the frame in the filtered take (not continuous) |
| vision/rgb_image | bytes | - | RGB image of size (1408, 1408, 3) |
| vision/current_image | Image (hf.feature) | - | head RGB image of size (224, 224, 3) |
| vision/history_image | list(Image) (hf.feature) | - | 5 history head RGB images (5 s, t-5 ~ t-1) of size (224, 224, 3) |
| vision/video_frame | int | - | index of the frame in the video |
| vision/histroy_idx | list | - | indices of the history frames in the HF image dataset |
| annotation/language | str | - | coarse_grained or fine_grained |
| annotation/start_frame | int | - | start frame of this keystep |
| annotation/end_frame | int | - | |
| annotation/delta_idx | int | - | index change in the filtered keystep |
| current/head/raw_pose | ndarray | (4, 4) | in the world frame |
| current/left_hand/raw_position | ndarray | (26, 3) | 26 joints of the left hand |
| current/left_hand/mano_params | ndarray | (15,) | |
| current/left_hand/wrist_pose | ndarray | (4, 4) | wrist pose of the left hand; rotation optimized by MANO |
| current/right_hand/raw_position | ndarray | (26, 3) | |
| current/right_hand/mano_params | ndarray | (15,) | |
| current/right_hand/wrist_pose | ndarray | (4, 4) | |
| current/head/pose_in_base | ndarray | (9,) | in the base frame |
| current/left_hand/wrist_in_base | ndarray | (9,) | wrist only |
| current/left_hand/gripper | ndarray | (1,) | gripper width |
| current/right_hand/wrist_in_base | ndarray | (9,) | |
| current/right_hand/gripper | ndarray | (1,) | |
| current/head/move | bool | - | whether the component is moving in the current subtask |
| current/left_hand/move | bool | - | |
| current/right_hand/move | bool | - | |
| future/complete | ndarray | (100,) | future chunk of 100 steps |
| future/head/move | ndarray | (100,) | |
| future/head/pose_in_base | ndarray | (100, 9) | |
| future/left_hand/move | ndarray | (100,) | |
| future/left_hand/wrist_in_base | ndarray | (100, 9) | |
| future/left_hand/gripper | ndarray | (100, 1) | |
| future/right_hand/move | ndarray | (100,) | |
| future/right_hand/wrist_in_base | ndarray | (100, 9) | |
| future/right_hand/gripper | ndarray | (100, 1) | |
| history/complete | list | - | history chunk of 15 steps |
| history/head/move | list | - | |
| history/head/pose_in_base | list | - | |
| history/left_hand/move | list | - | |
| history/left_hand/wrist_in_base | list | - | |
| history/left_hand/gripper | list | - | |
| history/right_hand/move | list | - | |
| history/right_hand/wrist_in_base | list | - | |
| history/right_hand/gripper | list | - | |
#### Monte02

| Key | Type | Shape | Details |
|---|---|---|---|
| source | str | - | |
| take_name | str | - | sample_... |
| frame_idx | int | - | |
| vision/video_frame | int | - | |
| vision/rgb_image | bytes | - | head RGB image of size (640, 480, 3) |
| vision/current_image | Image (hf.feature) | - | head RGB image of size (224, 224, 3) |
| vision/history_image | list(Image) (hf.feature) | - | 5 history head RGB images (5 s, t-5 ~ t-1) of size (224, 224, 3) |
| vision/history_idx | list | - | [t-15 ~ t] |
| annotation/task | str | - | task language |
| annotation/language | str | - | subtask language |
| annotation/start_frame | int | - | |
| annotation/end_frame | int | - | |
| annotation/delta_idx | int | - | |
| current/complete | bool | - | whether the subtask is complete |
| current/left_hand/gripper | ndarray | (1,) | 0 or 1 (? 0.065) |
| current/right_hand/gripper | ndarray | (1,) | 0 or 1 (? 0.065) |
| current/left_hand/gripper_width | ndarray | (1,) | 0~0.01 |
| current/right_hand/gripper_width | ndarray | (1,) | 0~0.01 |
| current/head/angles | ndarray | (2,) | pitch, yaw |
| current/chassis/pose_in_init | ndarray | (7,) | xyz + wxyz quaternion |
| current/head/pose_in_base | ndarray | (9,) | xyz + rot6d; base = init_head |
| current/head/pose_in_step_base | ndarray | (9,) | xyz + rot6d; step_base = current init_head |
| current/left_hand/wrist_in_base | ndarray | (9,) | |
| current/right_hand/wrist_in_base | ndarray | (9,) | |
| current/left_hand/wrist_in_step_base | ndarray | (9,) | |
| current/right_hand/wrist_in_step_base | ndarray | (9,) | |
| current/head/move | bool | - | whether the component is moving in the current subtask |
| current/left_hand/move | bool | - | |
| current/right_hand/move | bool | - | |
| future/complete | ndarray | (100,) | future actions and states |
| future/head/move | ndarray | (100,) | |
| future/head/pose_in_base | ndarray | (100, 9) | |
| future/head/pose_in_step_base | ndarray | (100, 9) | |
| future/left_hand/move | ndarray | (100,) | |
| future/left_hand/wrist_in_base | ndarray | (100, 9) | |
| future/left_hand/wrist_in_step_base | ndarray | (100, 9) | |
| future/left_hand/gripper | ndarray | (100, 1) | |
| future/right_hand/move | ndarray | (100,) | |
| future/right_hand/wrist_in_base | ndarray | (100, 9) | |
| future/right_hand/wrist_in_step_base | ndarray | (100, 9) | |
| future/right_hand/gripper | ndarray | (100, 1) | |
| history/complete | list | - | history actions and states |
| history/head/move | list | - | |
| history/head/pose_in_base | list | - | |
| history/head/pose_in_step_base | list | - | |
| history/left_hand/move | list | - | |
| history/left_hand/wrist_in_base | list | - | |
| history/left_hand/wrist_in_step_base | list | - | |
| history/left_hand/gripper | list | - | |
| history/right_hand/move | list | - | |
| history/right_hand/wrist_in_base | list | - | |
| history/right_hand/wrist_in_step_base | list | - | |
| history/right_hand/gripper | list | - | |
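The 9-dim pose fields (xyz + rot6d) can be turned back into a full SE(3) pose with a Gram-Schmidt step. A sketch, assuming the common 6D convention where the six values are the first two columns of the rotation matrix (the dataset's exact ordering is not documented here, so verify it against the real arrays):

```python
import numpy as np

def pose9_to_se3(pose9):
    """Unpack a 9-dim pose (xyz + rot6d) into translation and rotation."""
    pose9 = np.asarray(pose9, dtype=float)
    t = pose9[:3]
    a1, a2 = pose9[3:6], pose9[6:9]
    b1 = a1 / np.linalg.norm(a1)        # first column, normalized
    b2 = a2 - np.dot(b1, a2) * b1       # remove component along b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)               # third column completes a right-handed frame
    R = np.stack([b1, b2, b3], axis=1)
    return t, R

# Example: 90-degree rotation about z at position (1, 2, 3)
t, R = pose9_to_se3([1, 2, 3, 0, 1, 0, -1, 0, 0])
```

The Gram-Schmidt step makes the decoding robust to small numerical drift in the stored 6D values, since the output is re-orthonormalized.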

## Notes

- We provide preprocessed datasets to ensure consistent quality and reduce preprocessing overhead.
- Human data is filtered with strict criteria to improve learning stability.
- Robot data is collected in real-world environments.