Update dataset card with paper, project, code links, citation, and extended description

#1
by nielsr (HF Staff), opened

Files changed (1):
1. README.md +117 −5

README.md CHANGED
@@ -4,6 +4,7 @@ task_categories:
 - robotics
 tags:
 - LeRobot
 configs:
 - config_name: default
   data_files: data/*/*.parquet
@@ -13,11 +14,12 @@ This dataset was created using [LeRobot](https://github.com/huggingface/lerobot)

 ## Dataset Description


-
-- **Homepage:** [More Information Needed]
-- **Paper:** [More Information Needed]
-- **License:** apache-2.0

 ## Dataset Structure

@@ -272,11 +274,121 @@ This dataset was created using [LeRobot](https://github.com/huggingface/lerobot)
 }
 ```

 ## Citation

 **BibTeX:**

 ```bibtex
-[More Information Needed]
 ```
 
 - robotics
 tags:
 - LeRobot
+library_name: LeRobot
 configs:
 - config_name: default
   data_files: data/*/*.parquet
 

 ## Dataset Description

+This dataset is part of the work presented in the paper [Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers](https://huggingface.co/papers/2507.15833), which explores how incorporating human-like active gaze into robot policies can improve both efficiency and performance. It pairs human gaze (eye-tracking) data with robot demonstrations collected on the AV-ALOHA simulation platform, and is intended for training policies that use human gaze to enable foveated image processing, reducing computational overhead while improving performance and robustness.
+
+- **Homepage:** [https://ian-chuang.github.io/gaze-av-aloha/](https://ian-chuang.github.io/gaze-av-aloha/)
+- **Paper:** [https://huggingface.co/papers/2507.15833](https://huggingface.co/papers/2507.15833)
+- **Code:** [https://github.com/ian-chuang/gaze-av-aloha](https://github.com/ian-chuang/gaze-av-aloha)
+- **License:** apache-2.0
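As an aside, the foveation idea behind the dataset can be sketched in a few lines. This is an illustrative toy example with made-up names, not the paper's implementation: a gaze point selects a small full-resolution crop (the "fovea"), while the rest of the image is kept only at low resolution, shrinking the number of pixels (and hence ViT tokens) a policy must process.

```python
# Toy sketch of foveated image processing (illustrative only, not the
# paper's code). The "image" is a plain H x W list of lists.

def crop(img, cy, cx, size):
    """Full-resolution square crop of side `size` centered near (cy, cx)."""
    h, w = len(img), len(img[0])
    half = size // 2
    top = min(max(cy - half, 0), h - size)
    left = min(max(cx - half, 0), w - size)
    return [row[left:left + size] for row in img[top:top + size]]

def downsample(img, stride):
    """Coarse periphery: keep every `stride`-th pixel in each dimension."""
    return [row[::stride] for row in img[::stride]]

def foveate(img, gaze, fovea_size=16, stride=4):
    cy, cx = gaze
    return crop(img, cy, cx, fovea_size), downsample(img, stride)

image = [[(r * 64 + c) % 256 for c in range(64)] for r in range(64)]
fovea, periphery = foveate(image, gaze=(20, 30))

full_pixels = 64 * 64                 # 4096 pixels at full resolution
foveated_pixels = 16 * 16 + 16 * 16   # fovea crop + downsampled periphery
print(full_pixels, foveated_pixels)   # 4096 512: an 8x reduction
```

A real foveated ViT would tokenize the fovea and periphery together, but the pixel-count arithmetic above is the source of the computational savings.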
 
 ## Dataset Structure

 }
 ```

+## Sample Usage
+
+To download and preprocess the dataset, and then train policies, follow the instructions below from the [official GitHub repository](https://github.com/ian-chuang/gaze-av-aloha):
+
+### Download and Preprocess Dataset
+
+We use the [LeRobot dataset format](https://github.com/huggingface/lerobot) for ease of sharing and visualization via Hugging Face. However, LeRobot's dataloader can be slow, so we convert each dataset into a custom `AVAlohaDataset` format based on **Zarr** for faster access during training.
+
+**Available dataset repository IDs (AV-ALOHA simulation datasets with human eye-tracking annotations):**
+
+* `iantc104/av_aloha_sim_cube_transfer`
+* `iantc104/av_aloha_sim_peg_insertion`
+* `iantc104/av_aloha_sim_slot_insertion`
+* `iantc104/av_aloha_sim_hook_package`
+* `iantc104/av_aloha_sim_pour_test_tube`
+* `iantc104/av_aloha_sim_thread_needle`
+
+**Conversion instructions:**
+
+To convert a dataset to Zarr format, run the following command from the project root:
+
+```bash
+python gym_av_aloha/scripts/convert_lerobot_to_avaloha.py --repo_id <dataset_repo_id>
+```
+
+For example:
+
+```bash
+python gym_av_aloha/scripts/convert_lerobot_to_avaloha.py --repo_id iantc104/av_aloha_sim_thread_needle
+```
+
+Converted datasets will be saved under:
+
+```
+gym_av_aloha/outputs/
+```
+
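The speedup from a Zarr-backed format comes largely from storing frames contiguously and slicing by index, rather than opening and decoding many small files per sample. The `AVAlohaDataset` internals are not shown in this card; the following is a minimal, hypothetical sketch of the idea, using plain Python lists in place of chunked Zarr arrays:

```python
# Hypothetical sketch of an episode-indexed contiguous store -- the general
# idea behind Zarr-style dataset layouts (not AVAlohaDataset's actual code).

class ContiguousEpisodeStore:
    def __init__(self):
        self.frames = []          # all frames from all episodes, back to back
        self.episode_bounds = []  # (start, end) index pair per episode

    def add_episode(self, frames):
        start = len(self.frames)
        self.frames.extend(frames)
        self.episode_bounds.append((start, len(self.frames)))

    def episode(self, i):
        # O(1) bookkeeping plus one contiguous slice, instead of per-frame
        # file opens and decodes.
        start, end = self.episode_bounds[i]
        return self.frames[start:end]

store = ContiguousEpisodeStore()
store.add_episode([f"ep0_frame{i}" for i in range(3)])
store.add_episode([f"ep1_frame{i}" for i in range(5)])
print(store.episode(1)[0])  # ep1_frame0
```

In the real format the per-frame payloads are arrays in chunked, compressed Zarr storage, but the index-then-slice access pattern is the same.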
+### Train & Evaluate Policies
+
+Train and evaluate policies using [`train.py`](https://github.com/ian-chuang/gaze-av-aloha/blob/main/gaze_av_aloha/scripts/train.py).
+
+**Fov-Act (end-to-end gaze as action):**
+
+```bash
+python gaze_av_aloha/scripts/train.py \
+    policy=foveated_vit_policy \
+    task=<task e.g. av_aloha_sim_thread_needle> \
+    policy.vision_encoder_kwargs.repo_id=iantc104/mae_vitb_foveated_vit \
+    policy.optimizer_lr_backbone=1e-5 \
+    wandb.enable=true \
+    wandb.project=<project name> \
+    wandb.entity=<your wandb entity> \
+    wandb.job_name=fov-act \
+    device=cuda
+```
+
+**Fov-UNet (two-stage with pretrained gaze model):**
+
+```bash
+python gaze_av_aloha/scripts/train.py \
+    policy=foveated_vit_policy \
+    task=<task e.g. av_aloha_sim_thread_needle> \
+    policy.use_gaze_as_action=false \
+    policy.gaze_model_repo_id=<gaze model e.g. iantc104/gaze_model_av_aloha_sim_thread_needle> \
+    policy.vision_encoder_kwargs.repo_id=iantc104/mae_vitb_foveated_vit \
+    policy.optimizer_lr_backbone=1e-5 \
+    wandb.enable=true \
+    wandb.project=<project name> \
+    wandb.entity=<your wandb entity> \
+    wandb.job_name=fov-unet \
+    device=cuda
+```
+
+**Fine (full-resolution ViT baseline):**
+
+```bash
+python gaze_av_aloha/scripts/train.py \
+    policy=vit_policy \
+    task=<task e.g. av_aloha_sim_thread_needle> \
+    policy.vision_encoder_kwargs.repo_id=iantc104/mae_vitb_vit \
+    policy.optimizer_lr_backbone=1e-5 \
+    wandb.enable=true \
+    wandb.project=<project name> \
+    wandb.entity=<your wandb entity> \
+    wandb.job_name=fine \
+    device=cuda
+```
+
+**Coarse (low-resolution ViT baseline):**
+
+```bash
+python gaze_av_aloha/scripts/train.py \
+    policy=low_res_vit_policy \
+    task=<task e.g. av_aloha_sim_thread_needle> \
+    policy.vision_encoder_kwargs.repo_id=iantc104/mae_vitb_low_res_vit \
+    policy.optimizer_lr_backbone=1e-5 \
+    wandb.enable=true \
+    wandb.project=<project name> \
+    wandb.entity=<your wandb entity> \
+    wandb.job_name=coarse \
+    device=cuda
+```
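The `key=value` arguments in the training commands above are Hydra-style overrides: dotted keys such as `policy.optimizer_lr_backbone=1e-5` address nested config fields. The sketch below illustrates only the dotted-key-to-nested-dict mapping; it is not Hydra's implementation, and bare selections like `policy=foveated_vit_policy` (which pick a whole config group in Hydra) are not modeled.

```python
# Illustrative sketch of dotted "a.b.c=value" overrides applied to a nested
# config dict (not Hydra's actual implementation).

def apply_overrides(config, overrides):
    for item in overrides:
        key, _, value = item.partition("=")
        *path, leaf = key.split(".")
        node = config
        for part in path:
            node = node.setdefault(part, {})  # walk/create nested dicts
        node[leaf] = value
    return config

cfg = apply_overrides({}, [
    "policy.optimizer_lr_backbone=1e-5",
    "wandb.enable=true",
    "wandb.job_name=fov-act",
    "device=cuda",
])
print(cfg["wandb"]["job_name"])  # fov-act
```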

 ## Citation

 **BibTeX:**

 ```bibtex
+@misc{chuang2025lookfocusactefficient,
+  title={Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers},
+  author={Ian Chuang and Andrew Lee and Dechen Gao and Jinyu Zou and Iman Soltani},
+  year={2025},
+  eprint={2507.15833},
+  archivePrefix={arXiv},
+  primaryClass={cs.RO},
+  url={https://arxiv.org/abs/2507.15833},
+}
 ```