Improve dataset card: Add paper, project page, code, citation, tags, and usage instructions
#2
by nielsr HF Staff - opened
README.md CHANGED
The change touches four regions of README.md: the YAML front matter gains descriptive tags and a `library_name`, the Dataset Description replaces template placeholder bullets with a summary and links, the feature-schema block is re-emitted with escaped quotes (its content is otherwise unchanged), and a new Sample Usage section plus a filled-in BibTeX entry are appended in place of the previously empty citation block. The updated card reads as follows.
The updated YAML front matter:

    task_categories:
    - robotics
    tags:
    - LeRobot
    - gaze
    - foveated-vision
    - robot-learning
    - simulation
    library_name: lerobot
    configs:
    - config_name: default
      data_files: data/*/*.parquet
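The `configs` entry wires the parquet shards into the Hub's `datasets` loader, so the tabular features (per-frame states and actions; the camera streams live in separate video files) can be read without LeRobot at all. A minimal sketch; the column names are an assumption based on the feature schema shown under Dataset Structure below:

```python
from datasets import load_dataset

# Reads the parquet files matched by data/*/*.parquet (default config).
# Video features are stored as separate files and are not included here.
ds = load_dataset("iantc104/av_aloha_sim_peg_insertion", split="train")
print(ds)                           # column names and row count
print(ds[0]["observation.state"])   # assumed column: 21-dim state vector
```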
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

## Dataset Description

This dataset, presented in the paper [Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers](https://huggingface.co/papers/2507.15833), provides a simulation benchmark and dataset for training robot policies that incorporate human gaze. It includes bimanual robot demonstrations with synchronized human eye-tracking data collected using the AV-ALOHA simulation platform for the peg insertion task. This dataset is part of a larger effort to explore how human-like active gaze can enhance robot learning efficiency and robustness.

- **Homepage:** [https://ian-chuang.github.io/gaze-av-aloha/](https://ian-chuang.github.io/gaze-av-aloha/)
- **Paper:** [https://huggingface.co/papers/2507.15833](https://huggingface.co/papers/2507.15833)
- **Code:** [https://github.com/ian-chuang/gaze-av-aloha](https://github.com/ian-chuang/gaze-av-aloha)
- **License:** apache-2.0
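For a quick sanity check, the dataset can also be loaded through the `lerobot` library itself. A minimal sketch; the import path has moved between lerobot releases, so treat it as an assumption, and the repo id is taken from the conversion command in Sample Usage below:

```python
# Minimal sketch: load the dataset with LeRobot and inspect one frame.
# Assumes a lerobot release exposing LeRobotDataset at this path; newer
# releases may use `from lerobot.datasets.lerobot_dataset import LeRobotDataset`.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("iantc104/av_aloha_sim_peg_insertion")
print(ds.num_episodes, ds.num_frames)     # dataset size
frame = ds[0]                             # dict of torch tensors per feature
print(frame["observation.state"].shape)   # expected: torch.Size([21])
```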

## Dataset Structure

The feature schema (an excerpt from the dataset's `meta/info.json`) includes multiple 480x640 RGB camera streams and a 21-dimensional robot state vector:

```json
"observation.images.wrist_cam_right": {
    "dtype": "video",
    "shape": [480, 640, 3],
    "names": ["height", "width", "channel"],
    "info": {
        "video.height": 480,
        "video.width": 640,
        "video.codec": "av1",
        "video.pix_fmt": "yuv420p",
        "video.is_depth_map": false,
        "video.fps": 25,
        "video.channels": 3,
        "has_audio": false
    }
},
"observation.state": {
    "dtype": "float32",
    "shape": [21]
}
```

`observation.images.overhead_cam` and `observation.images.worms_eye_cam` carry the same video spec as `wrist_cam_right` (480x640, AV1, 25 fps) and are omitted above for brevity.
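To inspect the full schema without downloading any video data, only the metadata file needs to be fetched. A sketch using `huggingface_hub`; the `meta/info.json` path follows the LeRobot v2 dataset layout and should be treated as an assumption for other layout versions:

```python
import json
from huggingface_hub import hf_hub_download

# Download only the schema file, not the parquet/video payload.
info_path = hf_hub_download(
    repo_id="iantc104/av_aloha_sim_peg_insertion",
    filename="meta/info.json",
    repo_type="dataset",
)
with open(info_path) as f:
    info = json.load(f)

# List every feature with its dtype and shape, e.g.
#   observation.images.wrist_cam_right: video (480, 640, 3)
#   observation.state: float32 (21,)
for name, spec in info["features"].items():
    print(f"{name}: {spec['dtype']} {tuple(spec['shape'])}")
```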

## Sample Usage

This dataset is provided in LeRobot format for ease of sharing and visualization. For faster access during training, it is recommended to convert it to the custom Zarr-based `AVAlohaDataset` format.

1. **Install dependencies.** Ensure the `lerobot` library and the other dependencies described in the official GitHub repository are installed:

   ```bash
   # Install LeRobot (if not already installed)
   pip install git+https://github.com/huggingface/lerobot.git

   # Clone the gaze-av-aloha repository for its scripts and set up the environment
   git clone https://github.com/ian-chuang/gaze-av-aloha.git
   cd gaze-av-aloha
   # Follow any additional installation steps from the repo's README, e.g. conda env setup
   conda create -n gaze python=3.10
   conda activate gaze
   pip install -e ./gym_av_aloha
   pip install -e ./gaze_av_aloha
   ```

2. **Convert the dataset to Zarr.** Use the conversion script provided in the GitHub repository:

   ```bash
   python gym_av_aloha/scripts/convert_lerobot_to_avaloha.py --repo_id iantc104/av_aloha_sim_peg_insertion
   ```

   Converted datasets are saved under `gym_av_aloha/outputs/`; a sketch for reading the converted store is shown below.

For more detailed usage, including training and evaluating policies, refer to the [project's GitHub repository](https://github.com/ian-chuang/gaze-av-aloha).
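Once converted, the Zarr store can be opened directly to verify its contents. A sketch; the exact output folder name and internal array layout are assumptions here, and the conversion script plus the `AVAlohaDataset` class in `gym_av_aloha` define the real structure:

```python
import zarr

# Hypothetical output path: the conversion script writes under
# gym_av_aloha/outputs/, but the exact folder name is an assumption.
root = zarr.open("gym_av_aloha/outputs/av_aloha_sim_peg_insertion", mode="r")

# Walk the store and print every array with its shape and dtype.
def show(name, node):
    if isinstance(node, zarr.Array):
        print(f"{name}: shape={node.shape} dtype={node.dtype}")

root.visititems(show)
```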

## Citation

**BibTeX:**

```bibtex
@misc{chuang2025lookfocusactefficient,
  title={Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers},
  author={Ian Chuang and Andrew Lee and Dechen Gao and Jinyu Zou and Iman Soltani},
  year={2025},
  eprint={2507.15833},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2507.15833},
}
```