# Dataset for ASA

This repo provides the data used in our paper ***Act, Sense, Act: Learning Non-Markovian Active Perception Strategies from Large-Scale Egocentric Human Data***. It consists of a curated combination of public egocentric human datasets and collected robot data, processed into a unified format for training.

For more details, please refer to the [paper](https://arxiv.org/abs/2602.04600) and [project page](https://jern-li.github.io/asa/).

## Dataset Overview

### Human Data

| Source | Type | Samples | Takes | Languages | Take_Languages |
|--------|------|---------|-------|-----------|----------------|
| CaptainCook4D | frame3_chunk1-100_his10-15_anno_image | 1,071,604 | 257 | 351 | 3417 |
| EgoExo4D | proprio_frame1_chunk3-100_his30-15_image | 421,582 | 249 | 2730 | 3131 |

### Robot Data

| Source | Task | Type | Samples | Takes | Languages | Take_Languages | Details |
|--------|------|------|---------|-------|-----------|----------------|---------|
| Monte02 | task1_1 | frame1_chunk3-100_his30-15_extend90_gripper_image | 493,678 | 191 | 3 | 573 | vision/current_image 224, vision/history_image 224, hf.feature |
| Monte02 | task3_1 | frame1_chunk3-100_his30-15_extend90_gripper_image_new | 383,729 | 102 | 3 | 306 | new (no light), new_anno |
| Monte02 | task3_2 | frame1_chunk3-100_his30-15_extend90_gripper_image_newnew_aug | 300,958 | 83 | 3 | 249 | new sub23, new sub1 + new-only-sub1 (180k), and image augmentation |
| Monte02 | task1_2 | frame1_chunk3-100_his30-15_extend90_gripper_image_move | 375,143 | 188 | 2 | 376 | only subtasks 2 and 3; source = 'Monte02_Move' |
| Monte02 | task1_2 | frame1_chunk3-100_his30-15_extend90_gripper_hand_new | 275,699 | 218 | 2 | 218 | sub1 old + sub4 new; source = 'Monte02', 'Monte02_12sub4' |
| Monte02 | task2_1 | proprio_frame1_chunk3-100_his30-15_extend90_gripper_newdata_image_new | 151,628 | 69 | 2 | 138 | new data, big ring |

## Dataset Structure

### Directory Layout

```
ASA/
├── captaincook4d
│   └── hf_datasets
│       └── proprio_frame3_chunk1-100_his30-15_anno_image
│           ├── by_language.pkl
│           ├── by_take_language.pkl
│           ├── by_take.pkl
│           ├── data-00000-of-00028.arrow
│           ├── ...
│           ├── data-00027-of-00028.arrow
│           ├── dataset_info.json
│           └── state.json
├── egoexo4d
│   └── hf_datasets
│       └── proprio_frame1_chunk3-100_his30-15_image
│           ├── xxx.pkl
│           ├── xxxxx.arrow
│           └── xxx.json
└── monte02
    ├── hf_datasets
    │   ├── task1_1
    │   │   └── proprio_xxx
    │   │       ├── xxx.pkl
    │   │       ├── xxxxx.arrow
    │   │       └── xxx.json
    │   ├── task1_2
    │   ├── task2_1
    │   ├── task3_1
    │   └── task3_2
    └── raw_data
        ├── task1_1.zip
        │   └── folder
        │       └── sample_xxxx_xxx
        │           ├── annotation.json
        │           ├── head_video.avi
        │           ├── robot_data.h5
        │           ├── label_result.txt (optional, not available for all samples)
        │           ├── left_video.avi (optional)
        │           ├── right_video.avi (optional)
        │           └── valid.txt
        ├── task1_2.zip
        ├── task2_1.zip
        ├── task3_1.zip
        └── task3_2.zip
```

### Data Fields

<details>
<summary> CaptainCook4D </summary>

| Key | Type | Shape | Details |
|-----|------|-------|---------|
| `source` | `str` | - | which dataset the sample comes from |
| `take_name` | `str` | - | |
| `frame_idx` | `int` | - | index of the frame in the filtered take (not continuous; aligned with the pose index) |
| `vision/rgb_image` | `bytes` | - | RGB image of size **(504, 896, 3)** |
| `vision/current_image` | `Image` (hf.feature) | - | head RGB image of size **(224, 224, 3)** |
| `vision/history_image` | `list(Image)` (hf.feature) | - | 5 history head RGB images (5 s, t-5 ~ t-1), each of size **(224, 224, 3)** |
| `vision/video_frame` | `int` | - | index of the frame in the video |
| `vision/histroy_idx` | `list` | - | indices of the history frames in the **HF_IMAGE_DATASET**; may fall in a past subtask |
| `current/complete` | `bool` | - | whether the subtask is complete |
| `annotation/language` | `str` | - | |
| `annotation/start_frame` | `int` | - | start frame of this keystep |
| `annotation/end_frame` | `int` | - | |
| `annotation/delta_idx` | `int` | - | index change in the filtered keystep |
| `current/head/raw_pose` | `ndarray` | (4, 4) | in the world frame |
| `current/left_hand/raw_pose` | `ndarray` | (26, 4, 4) | 26 joints of the left hand |
| `current/left_hand/mano_params` | `ndarray` | (15,) | not used |
| `current/right_hand/raw_pose` | `ndarray` | (26, 4, 4) | |
| `current/right_hand/mano_params` | `ndarray` | (15,) | |
| `current/head/pose_in_base` | `ndarray` | (9,) | in the base frame |
| `current/left_hand/pose_in_base` | `ndarray` | (26, 9) | all 26 joints |
| `current/left_hand/wrist_in_base` | `ndarray` | (9,) | wrist only |
| `current/left_hand/gripper` | `ndarray` | (1,) | |
| `current/right_hand/pose_in_base` | `ndarray` | (26, 9) | all 26 joints |
| `current/right_hand/wrist_in_base` | `ndarray` | (9,) | |
| `current/right_hand/gripper` | `ndarray` | (1,) | normalized gripper state |
| `current/head/move` | `bool` | - | whether the component is moving in the current subtask |
| `current/left_hand/move` | `bool` | - | |
| `current/right_hand/move` | `bool` | - | |
| `future/complete` | `ndarray` | (100,) | future chunk of 100 steps |
| `future/head/move` | `ndarray` | (100,) | |
| `future/head/pose_in_base` | `ndarray` | (100, 9) | |
| `future/left_hand/move` | `ndarray` | (100,) | |
| `future/left_hand/wrist_in_base` | `ndarray` | (100, 9) | |
| `future/left_hand/gripper` | `ndarray` | (100, 1) | |
| `future/right_hand/move` | `ndarray` | (100,) | |
| `future/right_hand/wrist_in_base` | `ndarray` | (100, 9) | |
| `future/right_hand/gripper` | `ndarray` | (100, 1) | |
| `history/complete` | `list` | - | history chunk of 15 steps, only within this subtask |
| `history/head/move` | `list` | - | |
| `history/head/pose_in_base` | `list` | - | |
| `history/left_hand/move` | `list` | - | |
| `history/left_hand/wrist_in_base` | `list` | - | |
| `history/left_hand/gripper` | `list` | - | |
| `history/right_hand/move` | `list` | - | |
| `history/right_hand/wrist_in_base` | `list` | - | |
| `history/right_hand/gripper` | `list` | - | |

</details>
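
The `vision/history_image` field above is described as five past head frames at one-second spacing (t-5 ~ t-1). A small sketch of how such history frame indices might be computed from a frame index and the video fps, with clamping near the start of a take (the function name and the clamping behaviour are illustrative assumptions, not the repo's actual preprocessing, which may instead drop samples without a full history):

```python
def history_frame_indices(t: int, fps: int = 30, seconds: int = 5) -> list[int]:
    """Frame indices for the t-5 ... t-1 second history window.

    Clamps to frame 0 near the start of a take (assumption; the real
    pipeline may handle short prefixes differently).
    """
    return [max(0, t - k * fps) for k in range(seconds, 0, -1)]

print(history_frame_indices(300))  # [150, 180, 210, 240, 270]
print(history_frame_indices(60))   # [0, 0, 0, 0, 30]
```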
<details>
<summary> EgoExo4D </summary>

| Key | Type | Shape | Details |
|-----|------|-------|---------|
| `source` | `str` | - | which dataset the sample comes from |
| `take_name` | `str` | - | |
| `frame_idx` | `int` | - | index of the frame in the filtered take (not continuous) |
| `vision/rgb_image` | `bytes` | - | RGB image of size **(1408, 1408, 3)** |
| `vision/current_image` | `Image` (hf.feature) | - | head RGB image of size **(224, 224, 3)** |
| `vision/history_image` | `list(Image)` (hf.feature) | - | 5 history head RGB images (5 s, t-5 ~ t-1), each of size **(224, 224, 3)** |
| `vision/video_frame` | `int` | - | index of the frame in the video |
| `vision/histroy_idx` | `list` | - | indices of the history frames in the **HF_IMAGE_DATASET** |
| `annotation/language` | `str` | - | coarse_grained or fine_grained |
| `annotation/start_frame` | `int` | - | start frame of this keystep |
| `annotation/end_frame` | `int` | - | |
| `annotation/delta_idx` | `int` | - | index change in the filtered keystep |
| `current/head/raw_pose` | `ndarray` | (4, 4) | in the world frame |
| `current/left_hand/raw_position` | `ndarray` | (26, 3) | 26 joints of the left hand |
| `current/left_hand/mano_params` | `ndarray` | (15,) | |
| `current/left_hand/wrist_pose` | `ndarray` | (4, 4) | wrist pose of the left hand; rotation is optimized via MANO |
| `current/right_hand/raw_position` | `ndarray` | (26, 3) | |
| `current/right_hand/mano_params` | `ndarray` | (15,) | |
| `current/right_hand/wrist_pose` | `ndarray` | (4, 4) | |
| `current/head/pose_in_base` | `ndarray` | (9,) | in the base frame |
| `current/left_hand/wrist_in_base` | `ndarray` | (9,) | wrist only |
| `current/left_hand/gripper` | `ndarray` | (1,) | gripper width |
| `current/right_hand/wrist_in_base` | `ndarray` | (9,) | |
| `current/right_hand/gripper` | `ndarray` | (1,) | |
| `current/head/move` | `bool` | - | whether the component is moving in the current subtask |
| `current/left_hand/move` | `bool` | - | |
| `current/right_hand/move` | `bool` | - | |
| `future/complete` | `ndarray` | (100,) | future chunk of 100 steps |
| `future/head/move` | `ndarray` | (100,) | |
| `future/head/pose_in_base` | `ndarray` | (100, 9) | |
| `future/left_hand/move` | `ndarray` | (100,) | |
| `future/left_hand/wrist_in_base` | `ndarray` | (100, 9) | |
| `future/left_hand/gripper` | `ndarray` | (100, 1) | |
| `future/right_hand/move` | `ndarray` | (100,) | |
| `future/right_hand/wrist_in_base` | `ndarray` | (100, 9) | |
| `future/right_hand/gripper` | `ndarray` | (100, 1) | |
| `history/complete` | `list` | - | history chunk of 15 steps |
| `history/head/move` | `list` | - | |
| `history/head/pose_in_base` | `list` | - | |
| `history/left_hand/move` | `list` | - | |
| `history/left_hand/wrist_in_base` | `list` | - | |
| `history/left_hand/gripper` | `list` | - | |
| `history/right_hand/move` | `list` | - | |
| `history/right_hand/wrist_in_base` | `list` | - | |
| `history/right_hand/gripper` | `list` | - | |

</details>
<details>
<summary> Monte02 </summary>

| Key | Type | Shape | Details |
|-----|------|-------|---------|
| `source` | `str` | - | |
| `take_name` | `str` | - | sample_... |
| `frame_idx` | `int` | - | |
| `vision/video_frame` | `int` | - | |
| `vision/rgb_image` | `bytes` | - | head RGB image of size **(640, 480, 3)** |
| `vision/current_image` | `Image` (hf.feature) | - | head RGB image of size **(224, 224, 3)** |
| `vision/history_image` | `list(Image)` (hf.feature) | - | 5 history head RGB images (5 s, t-5 ~ t-1), each of size **(224, 224, 3)** |
| `vision/history_idx` | `list` | - | [t-15 ~ t] |
| `annotation/task` | `str` | - | task language |
| `annotation/language` | `str` | - | subtask language |
| `annotation/start_frame` | `int` | - | |
| `annotation/end_frame` | `int` | - | |
| `annotation/delta_idx` | `int` | - | |
| `current/complete` | `bool` | - | whether the subtask is complete |
| `current/left_hand/gripper` | `ndarray` | (1,) | 0 or 1 (? 0.065) |
| `current/right_hand/gripper` | `ndarray` | (1,) | 0 or 1 (? 0.065) |
| `current/left_hand/gripper_width` | `ndarray` | (1,) | 0 ~ 0.01 |
| `current/right_hand/gripper_width` | `ndarray` | (1,) | 0 ~ 0.01 |
| `current/head/angles` | `ndarray` | (2,) | pitch, yaw |
| `current/chassis/pose_in_init` | `ndarray` | (7,) | xyz + wxyz |
| `current/head/pose_in_base` | `ndarray` | (9,) | xyz + rot6d; base = init_head |
| `current/head/pose_in_step_base` | `ndarray` | (9,) | xyz + rot6d; step_base = current init_head |
| `current/left_hand/wrist_in_base` | `ndarray` | (9,) | |
| `current/right_hand/wrist_in_base` | `ndarray` | (9,) | |
| `current/left_hand/wrist_in_step_base` | `ndarray` | (9,) | |
| `current/right_hand/wrist_in_step_base` | `ndarray` | (9,) | |
| `current/head/move` | `bool` | - | whether the component is moving in the current subtask |
| `current/left_hand/move` | `bool` | - | |
| `current/right_hand/move` | `bool` | - | |
| `future/complete` | `ndarray` | (100,) | future actions and states |
| `future/head/move` | `ndarray` | (100,) | |
| `future/head/pose_in_base` | `ndarray` | (100, 9) | |
| `future/head/pose_in_step_base` | `ndarray` | (100, 9) | |
| `future/left_hand/move` | `ndarray` | (100,) | |
| `future/left_hand/wrist_in_base` | `ndarray` | (100, 9) | |
| `future/left_hand/wrist_in_step_base` | `ndarray` | (100, 9) | |
| `future/left_hand/gripper` | `ndarray` | (100, 1) | |
| `future/right_hand/move` | `ndarray` | (100,) | |
| `future/right_hand/wrist_in_base` | `ndarray` | (100, 9) | |
| `future/right_hand/wrist_in_step_base` | `ndarray` | (100, 9) | |
| `future/right_hand/gripper` | `ndarray` | (100, 1) | |
| `history/complete` | `list` | - | history actions and states |
| `history/head/move` | `list` | - | |
| `history/head/pose_in_base` | `list` | - | |
| `history/head/pose_in_step_base` | `list` | - | |
| `history/left_hand/move` | `list` | - | |
| `history/left_hand/wrist_in_base` | `list` | - | |
| `history/left_hand/wrist_in_step_base` | `list` | - | |
| `history/left_hand/gripper` | `list` | - | |
| `history/right_hand/move` | `list` | - | |
| `history/right_hand/wrist_in_base` | `list` | - | |
| `history/right_hand/gripper` | `list` | - | |

</details>
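
Several pose fields above are 9-vectors of `xyz + rot6d`. If the 6D part is the standard continuous rotation representation (the first two columns of the rotation matrix, flattened — an assumption about this repo's convention), the full matrix can be recovered by Gram-Schmidt orthonormalization plus a cross product:

```python
import math

def rot6d_to_matrix(d6):
    """Recover a 3x3 rotation matrix from a 6D rotation vector.

    Assumes d6 holds the first two columns of R, flattened (the common
    Zhou et al. convention); whether ASA stores rows or columns here
    is an assumption.
    """
    a1, a2 = d6[:3], d6[3:]
    # First column: normalize a1.
    n1 = math.sqrt(sum(x * x for x in a1))
    b1 = [x / n1 for x in a1]
    # Second column: remove the b1 component from a2, then normalize.
    dot = sum(x * y for x, y in zip(b1, a2))
    u2 = [y - dot * x for x, y in zip(b1, a2)]
    n2 = math.sqrt(sum(x * x for x in u2))
    b2 = [x / n2 for x in u2]
    # Third column: cross product b1 x b2.
    b3 = [b1[1] * b2[2] - b1[2] * b2[1],
          b1[2] * b2[0] - b1[0] * b2[2],
          b1[0] * b2[1] - b1[1] * b2[0]]
    # Assemble R with columns b1, b2, b3.
    return [[b1[i], b2[i], b3[i]] for i in range(3)]

# Identity rotation: first two columns (1,0,0) and (0,1,0).
print(rot6d_to_matrix([1, 0, 0, 0, 1, 0]))
# [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```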
## Notes

- We provide preprocessed datasets to ensure consistent quality and to reduce preprocessing overhead.
- The human data is filtered with strict criteria to improve learning stability.
- The robot data is collected in real-world environments.