Improve AimBot dataset card: Add paper, task category, and usage info
#1
opened by nielsr (HF Staff)

README.md (changed)
The previous card contained only the `license: apache-2.0` YAML front matter (diff `@@ -1,3 +1,116 @@`); the updated card follows.
---
license: apache-2.0
task_categories:
- robotics
---

# AimBot Dataset: Auxiliary Visual Cue for Visuomotor Policies

This repository provides access to the datasets and resources for the paper "[AimBot: A Simple Auxiliary Visual Cue to Enhance Spatial Awareness of Visuomotor Policies](https://huggingface.co/papers/2508.08113)".

AimBot is a lightweight visual augmentation technique that adds explicit spatial cues to improve visuomotor policy learning for robotic manipulation. It overlays shooting lines and scope reticles onto multi-view RGB images, providing auxiliary visual guidance that encodes the end-effector's state. The overlays are computed from depth images, camera extrinsics, and the current end-effector pose, and explicitly convey the spatial relationship between the gripper and objects in the scene. AimBot adds minimal computational overhead and requires no changes to model architectures: it simply replaces the original RGB images with their augmented counterparts.
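As a rough illustration of the geometry involved (this is not the authors' implementation; `project_point` and every number below are invented for the example), an overlay center can be obtained by projecting the end-effector position into a camera view with a standard pinhole model:

```python
# Illustrative sketch of how an AimBot-style reticle center could be placed:
# project the end-effector position into a camera view with a pinhole model.
# All camera parameters and poses below are made-up example values.

def project_point(p_world, world_to_cam, fx, fy, cx, cy):
    """Project a 3D world point to pixel coordinates (u, v)."""
    ph = (p_world[0], p_world[1], p_world[2], 1.0)  # homogeneous coordinates
    # World frame -> camera frame via a row-major 4x4 extrinsic matrix.
    xc, yc, zc = [
        sum(world_to_cam[i][j] * ph[j] for j in range(4)) for i in range(3)
    ]
    assert zc > 0, "point must be in front of the camera"
    # Pinhole projection with focal lengths (fx, fy), principal point (cx, cy).
    return fx * xc / zc + cx, fy * yc / zc + cy

# Identity extrinsics: camera at the origin looking down +Z (toy example).
world_to_cam = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
u, v = project_point((0.1, -0.05, 0.5), world_to_cam, 600, 600, 320, 240)
print(round(u, 1), round(v, 1))  # → 440.0 180.0
```

In the actual pipeline, the depth images additionally let the overlay encode distance; see the official repository for the real renderer.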

* **Project Page**: [https://aimbot-reticle.github.io/](https://aimbot-reticle.github.io/)
* **Official Code Repository**: [https://github.com/aimbot-reticle/AimBot-Pi0](https://github.com/aimbot-reticle/AimBot-Pi0)

The core `lerobot`-formatted LIBERO data augmented by AimBot for the simulation and real-world robot experiments can be found in the `Yinpei/lerobot_data_collection` Hugging Face dataset:
* **Simulation Data**: [https://huggingface.co/datasets/Yinpei/lerobot_data_collection/tree/main](https://huggingface.co/datasets/Yinpei/lerobot_data_collection/tree/main)
* **Real-World Data**: [https://huggingface.co/datasets/Yinpei/lerobot_data_collection/tree/realrobot/realrobot_all_tasks_reticle](https://huggingface.co/datasets/Yinpei/lerobot_data_collection/tree/realrobot/realrobot_all_tasks_reticle)
## Sample Usage

To get started with AimBot data and models, follow the instructions below, adapted from the [official code repository](https://github.com/aimbot-reticle/AimBot-Pi0).

### Installation

This repository (`AimBot-Pi0`) is adapted from the original [OpenPi](https://github.com/Physical-Intelligence/openpi/tree/main) codebase. First, clone it with submodules, then install the `openpi` and `AimBot` packages:

```bash
git clone --recurse-submodules https://github.com/aimbot-reticle/AimBot-Pi0.git
cd AimBot-Pi0

# Install the openpi packages (assumes uv is installed, as in the original repo)
GIT_LFS_SKIP_SMUDGE=1 uv sync
GIT_LFS_SKIP_SMUDGE=1 uv pip install -e .

# Install the AimBot package, located under third_party
cd third_party/AimBot
pip install -e src
export PYTHONPATH=$PYTHONPATH:$(pwd)/src
cd ../..  # Back to the main AimBot-Pi0 directory
```

Set the environment variables accordingly:
```bash
export LEROBOT_HOME=...        # Path to your LeRobot datasets (e.g., where downloaded data will live)
export OPENPI_DATA_HOME=...    # Same as LEROBOT_HOME, or a dedicated cache directory
export LIBERO_CONFIG_PATH=...  # Path to the LIBERO configs, e.g., third_party/libero/configs
```

Download the pre-trained checkpoints (optional, for evaluation):
```bash
mkdir runs
git lfs install
git clone git@hf.co:Yinpei/runs_ckpt runs/ckpts
# Untar checkpoints if they are in .tar format: tar -xvf xxx.tar
```

### Data Download

The `lerobot`-formatted datasets augmented by AimBot for the simulation and real-world experiments are available on the Hugging Face Hub. You can download them using `git lfs` or the `huggingface_hub` library. For example, to clone the simulation data:

```bash
git lfs install
git clone https://huggingface.co/datasets/Yinpei/lerobot_data_collection --branch main
# For the real-world data: git clone https://huggingface.co/datasets/Yinpei/lerobot_data_collection --branch realrobot
```
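If you prefer the `huggingface_hub` Python API over `git lfs`, a minimal sketch (note this downloads the full snapshot into the local Hub cache):

```python
# Minimal sketch: fetch a dataset snapshot with huggingface_hub instead of git lfs.
from huggingface_hub import snapshot_download

# Simulation data lives on the "main" revision; pass revision="realrobot"
# for the real-world data branch.
local_path = snapshot_download(
    repo_id="Yinpei/lerobot_data_collection",
    repo_type="dataset",
    revision="main",
)
print(local_path)  # local directory containing the downloaded files
```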

### Evaluation (Simulation)

To evaluate the AimBot-augmented Pi0 and Pi0-FAST policies in simulation:

1. **Terminal 1 (openpi env)**: Start the policy server.
   * For Pi0:
     ```bash
     python scripts/serve_policy.py --port 8001 --lerobot-repo-id large_crosshair_dynamic_default_color policy:checkpoint --policy.config=pi0_libero --policy.dir=runs/ckpts/pi0_libero/final-pi0-libero-large_crosshair_dynamic_default_color/29999
     ```
   * For Pi0-FAST:
     ```bash
     python scripts/serve_policy.py --port 8002 --lerobot-repo-id large_crosshair_dynamic_default_color policy:checkpoint --policy.config=pi0_fast_libero --policy.dir=runs/ckpts/pi0_fast_libero/final-pi0-fast-libero-large_crosshair_dynamic_default_color/29999
     ```

2. **Terminal 2 (libero env)**: Run the evaluation in the `libero` environment.
   ```bash
   export PYTHONPATH=$PYTHONPATH:$PWD/third_party/libero
   python examples/libero/eval_libero_aimbot.py --model-name eval_libero_<pi0/pi0_fast> --task_suite_name libero_10,libero_goal --port 8001/8002
   ```
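A running server can also be queried directly from Python. The sketch below assumes the `openpi-client` package that ships with the OpenPi codebase; the observation dict is a placeholder, and the exact keys expected by each policy should be taken from the repository's `examples/` directory:

```python
# Hypothetical client sketch: query a running serve_policy.py server.
# Assumes the openpi-client package is installed and a server listens on 8001.
from openpi_client import websocket_client_policy

client = websocket_client_policy.WebsocketClientPolicy(host="localhost", port=8001)

# Placeholder observation: fill with camera images, robot state, and the task
# prompt in the format expected by the policy (see examples/libero).
observation = {}
result = client.infer(observation)
print(result["actions"])  # predicted action chunk
```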

### Training

To train on the AimBot-augmented `lerobot` datasets (e.g., `modified_libero_reticle` from `Yinpei/lerobot_data_collection`):

1. **Compute normalization stats**:
   ```bash
   python scripts/compute_norm_stats.py --config-name pi0_libero --lerobot-repo-id modified_libero_reticle
   ```

2. **Start training**:
   ```bash
   XLA_PYTHON_CLIENT_MEM_FRACTION=0.99 python scripts/train.py pi0_libero --exp-name=my_aimbot_exp --batch-size=32 --overwrite --lerobot_repo_id modified_libero_reticle
   ```

For detailed instructions and the real-world experiment setup, please refer to the [original GitHub repository](https://github.com/aimbot-reticle/AimBot-Pi0).

---

## Citation

If you find this dataset or the AimBot work useful, please cite the paper:

```bibtex
@article{aimbot,
  title={AimBot: A Simple Auxiliary Visual Cue to Enhance Spatial Awareness of Visuomotor Policies},
  author={Dai, Yinpei and Lee, Jayjun and others},
  journal={CoRL},
  year={2025},
}
```