Improve dataset card: Add metadata, abstract, sample usage, image, and explicit license
#2
opened by nielsr (HF Staff)

README.md CHANGED
---
task_categories:
- robotics
license: mit
---

# Dataset and checkpoints for "AnyPlace: Learning Generalized Object Placement for Robot Manipulation"

[Website](https://any-place.github.io/) | [Paper](https://www.arxiv.org/abs/2502.04531) | [Code](https://github.com/ac-rad/anyplace)

## Introduction

Object placement in robotic tasks is inherently challenging due to the diversity of object geometries and placement configurations. To address this, we propose AnyPlace, a two-stage method trained entirely on synthetic data, capable of predicting a wide range of feasible placement poses for real-world tasks. Our key insight is that by leveraging a Vision-Language Model (VLM) to identify rough placement locations, we focus only on the relevant regions for local placement, which enables us to train the low-level placement-pose-prediction model to capture diverse placements efficiently. For training, we generate a fully synthetic dataset of randomly generated objects in different placement configurations (insertion, stacking, hanging) and train local placement-prediction models. We conduct extensive evaluations in simulation, demonstrating that our method outperforms baselines in terms of success rate, coverage of possible placement modes, and precision. In real-world experiments, we show how our approach directly transfers models trained purely on synthetic data to the real world, where it successfully performs placements in scenarios where other models struggle -- such as with varying object geometries, diverse placement modes, and achieving high precision for fine placement.

<div align="center">
<img src="https://raw.githubusercontent.com/ac-rad/anyplace/main/anyplace_workflow.png" alt="AnyPlace Workflow" width="100%">
</div>
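The key idea above -- using a coarse VLM-proposed location to restrict pose prediction to a local region -- can be sketched as a simple radius crop. The function name, toy data, and radius below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def crop_to_region(points, center, radius):
    """Keep only the points within `radius` of a rough placement location.

    `center` stands in for the coarse location a VLM would identify;
    the radius is an illustrative hyperparameter, not a value from the paper.
    """
    dists = np.linalg.norm(points - center, axis=1)
    return points[dists <= radius]

# Toy scene: a tight cluster at the origin plus far-away clutter.
rng = np.random.default_rng(0)
scene = np.vstack([rng.normal(scale=0.01, size=(100, 3)), np.full((50, 3), 5.0)])
local = crop_to_region(scene, center=np.zeros(3), radius=0.1)  # clutter filtered out
```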

## Sample Usage

This repository contains the synthetic training dataset and evaluation dataset (including object USD files, RGBD images, and object pointclouds) used by AnyPlace. Below are instructions for setting up the environment, training models, and performing evaluations using this dataset, as detailed in the [official GitHub repository](https://github.com/ac-rad/anyplace).
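As an illustration of working with the RGBD portion of the data, here is a minimal numpy sketch of back-projecting a depth image into a camera-frame pointcloud. The intrinsics and toy depth image are stand-ins, not values from this dataset:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an Nx3 camera-frame pointcloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy 2x2 depth image with one invalid pixel and made-up intrinsics.
depth = np.array([[1.0, 1.0], [0.0, 2.0]])
cloud = depth_to_pointcloud(depth, fx=100.0, fy=100.0, cx=1.0, cy=1.0)
```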

### Installation

First, set up the environment for the AnyPlace low-level pose prediction models:

```bash
conda create -n anyplace python=3.8
conda activate anyplace

pip install -r base_requirements.txt
pip install -e .
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
```

Then, install the `torch-scatter`, `torch-cluster`, and `knn_cuda` packages:

```bash
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.13.0+cu117.html --no-index
pip install torch-cluster -f https://data.pyg.org/whl/torch-1.13.0+cu117.html --no-index
pip install https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl
```

Finally, update and source the setup script to set environment variables:

```bash
source anyplace_env.sh
```
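One thing worth noting about the pins above: every CUDA wheel must carry the same build tag (`cu117` here), and mixing tags is a common source of install failures. A tiny illustrative check (the names are mine, not the repo's):

```python
# Illustrative only: the install commands above all target the cu117 build.
pins = [
    "torch==1.13.1+cu117",
    "torchvision==0.14.1+cu117",
    "torch-1.13.0+cu117",  # wheel index used for torch-scatter / torch-cluster
]

def cuda_tag(spec):
    """Return the CUDA build tag, i.e. the part after the last '+'."""
    return spec.rsplit("+", 1)[-1]

tags = {cuda_tag(p) for p in pins}
```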

### Training

1. **Download the AnyPlace synthetic dataset** (this dataset) from Hugging Face.
2. Configure [wandb](https://docs.wandb.ai/quickstart/) on your machine.
3. Run the following commands to launch single-task and multi-task training:

```bash
# for single-task training
cd training/
python train_full.py -c anyplace_cfgs/vial_inserting/anyplace_diffusion_molmocrop.yaml  # config files for different tasks can be found under config/train_cfgs/anyplace_cfgs

# for multi-task training
cd training/
python train_full.py -c anyplace_cfgs/multitask/anyplace_diffusion_molmocrop_mt.yaml
```

### Evaluation

For evaluation, first obtain the predicted placement poses by running the AnyPlace models, then execute the predicted placements using our IsaacLab Pick and Place pipeline.
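Executing a predicted placement amounts to applying a rigid transform to the object. A minimal numpy sketch, where the 4x4 homogeneous convention and the toy pose are illustrative assumptions rather than code from the repo:

```python
import numpy as np

def apply_pose(T, points):
    """Apply a 4x4 homogeneous transform to an Nx3 pointcloud."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (T @ homo.T).T[:, :3]

# Toy "placement pose": a 90-degree yaw plus a 0.1 m lift.
T = np.eye(4)
T[:3, :3] = np.array([[0.0, -1.0, 0.0],
                      [1.0,  0.0, 0.0],
                      [0.0,  0.0, 1.0]])
T[2, 3] = 0.1
moved = apply_pose(T, np.array([[1.0, 0.0, 0.0]]))
```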

#### Placement Pose Prediction

1. Set up the meshcat visualizer to visualize the object pointclouds at each diffusion step:

```bash
meshcat-server  # uses port 7000 by default
```

2. **Download the AnyPlace evaluation dataset** (this dataset) from Hugging Face.
3. Update the file paths in the config files, then run the AnyPlace model:

```bash
cd eval/
python evaluate_official.py -c anyplace_eval/vial_inserting/anyplace_diffusion_molmocrop_multitask.yaml  # config files for different tasks can be found under config/full_eval_cfgs/anyplace_eval
```

4. To visualize pointclouds at their final predicted placement poses, first update the data folder path in [`visualize_placement.py`](https://github.com/ac-rad/anyplace/blob/main/anyplace_model/eval/visualize_placement.py) and then run:

```bash
cd eval/
python visualize_placement.py
```
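When comparing predicted poses against ground truth, precision is often summarized as a translation error plus a rotation error. A small illustrative helper (not the repo's metric code; the 4x4 pose convention is an assumption):

```python
import numpy as np

def pose_errors(T_pred, T_gt):
    """Translation error (meters) and rotation error (radians) between 4x4 poses."""
    t_err = float(np.linalg.norm(T_pred[:3, 3] - T_gt[:3, 3]))
    R_rel = T_pred[:3, :3].T @ T_gt[:3, :3]
    # Rotation angle from the trace of the relative rotation matrix.
    cos = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return t_err, float(np.arccos(cos))

T_gt = np.eye(4)
T_pred = np.eye(4)
T_pred[0, 3] = 0.05  # prediction is 5 cm off along x, with no rotation error
t_err, r_err = pose_errors(T_pred, T_gt)
```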

#### IsaacLab Pick and Place Evaluation

Follow the instructions [here](https://github.com/ac-rad/anyplace/blob/main/anyplace_isaaclab_pick_place/README.md) to run the AnyPlace IsaacLab Pick and Place evaluation pipeline.

## License

This dataset is released under the [MIT license](https://github.com/ac-rad/anyplace/blob/main/LICENSE).

## Citation

```bibtex
@inproceedings{zhao2025anyplace,
title={AnyPlace: Learning Generalizable Object Placement for Robot Manipulation},
booktitle={9th Annual Conference on Robot Learning},
year={2025},
url={https://openreview.net/forum?id=H0zFqW6QM0}
}
```