Update dataset card with paper link, project page, citation, library name and tags

#1
opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +20 -6
README.md CHANGED
````diff
@@ -4,6 +4,12 @@ task_categories:
 - robotics
 tags:
 - LeRobot
+- gaze
+- foveated-vision
+- imitation-learning
+- human-demonstrations
+- av-aloha
+library_name: lerobot
 configs:
 - config_name: default
   data_files: data/*/*.parquet
@@ -11,12 +17,13 @@ configs:
 
 This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
 
-## Dataset Description
-
-- **Homepage:** [More Information Needed]
-- **Paper:** [More Information Needed]
+This dataset provides human eye-tracking data and robot demonstrations collected using the AV-ALOHA simulation platform, designed to train robot policies that incorporate human gaze. It is associated with the paper "[Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers](https://huggingface.co/papers/2507.15833)".
+
+## Dataset Description
+
+- **Homepage:** https://ian-chuang.github.io/gaze-av-aloha/
+- **Paper:** https://huggingface.co/papers/2507.15833
+- **Code:** https://github.com/ian-chuang/gaze-av-aloha
 - **License:** apache-2.0
 
 ## Dataset Structure
@@ -258,11 +265,18 @@ This dataset was created using [LeRobot](https://github.com/huggingface/lerobot)
 }
 ```
 
-
 ## Citation
 
 **BibTeX:**
 
 ```bibtex
-[More Information Needed]
+@misc{chuang2025lookfocusactefficient,
+      title={Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers},
+      author={Ian Chuang and Andrew Lee and Dechen Gao and Jinyu Zou and Iman Soltani},
+      year={2025},
+      eprint={2507.15833},
+      archivePrefix={arXiv},
+      primaryClass={cs.RO},
+      url={https://arxiv.org/abs/2507.15833},
+}
 ```
````