---
task_categories:
- robotics
license: mit
---
# Dataset and checkpoints for "AnyPlace: Learning Generalizable Object Placement for Robot Manipulation"
## Introduction
Object placement in robotic tasks is inherently challenging due to the diversity of object geometries and placement configurations. To address this, we propose AnyPlace, a two-stage method trained entirely on synthetic data, capable of predicting a wide range of feasible placement poses for real-world tasks. Our key insight is that by leveraging a Vision-Language Model (VLM) to identify rough placement locations, we focus only on the regions relevant for local placement, which enables us to train the low-level placement-pose-prediction model to capture diverse placements efficiently. For training, we generate a fully synthetic dataset of randomly generated objects in different placement configurations (insertion, stacking, hanging) and train local placement-prediction models. We conduct extensive evaluations in simulation, demonstrating that our method outperforms baselines in terms of success rate, coverage of possible placement modes, and precision. In real-world experiments, we show how our approach directly transfers models trained purely on synthetic data to the real world, where it successfully performs placements in scenarios where other models struggle, such as handling varying object geometries, covering diverse placement modes, and achieving high precision for fine placement.
## Sample Usage
This repository contains the synthetic training dataset and evaluation dataset (including object USD files, RGBD images, and object pointclouds) used by AnyPlace. Below are instructions for setting up the environment, training models, and performing evaluations using this dataset, as detailed in the official GitHub repository.
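The dataset files can be pulled programmatically with the `huggingface_hub` client. A minimal sketch (the repo id below is a placeholder, not the actual dataset identifier):

```python
def fetch_anyplace_dataset(repo_id: str, local_dir: str) -> str:
    """Download a Hugging Face dataset snapshot and return its local path."""
    # Lazy import so the function can be defined without the package installed;
    # install with: pip install huggingface_hub
    from huggingface_hub import snapshot_download
    # repo_type="dataset" targets dataset repos rather than model repos
    return snapshot_download(repo_id=repo_id, repo_type="dataset", local_dir=local_dir)

# Example (substitute the real repo id for the placeholder):
# fetch_anyplace_dataset("ORG/anyplace", "./anyplace_data")
```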
### Installation
First, set up the environment for the AnyPlace low-level pose prediction models:

```shell
conda create -n anyplace python=3.8
conda activate anyplace
pip install -r base_requirements.txt
pip install -e .
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
```
Then, install the torch-scatter, torch-cluster, and KNN_CUDA packages:

```shell
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.13.0+cu117.html --no-index
pip install torch-cluster -f https://data.pyg.org/whl/torch-1.13.0+cu117.html --no-index
pip install https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl
```
Finally, update and source the setup script to set the environment variables:

```shell
source anyplace_env.sh
```
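As a quick sanity check (not part of the repo), the following stdlib-only snippet confirms whether the packages installed above are importable in the active environment:

```python
import importlib.util

def check_deps(names):
    """Return {package: bool} indicating whether each package is importable."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

# Example (package import names assumed from the wheels installed above):
# check_deps(["torch", "torch_scatter", "torch_cluster", "knn_cuda"])
```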
### Training
- Download the AnyPlace synthetic dataset (this dataset) from Hugging Face.
- Configure wandb on your machine.
- Run the following commands to launch single-task or multi-task training:

```shell
# for single-task training
cd training/
python train_full.py -c anyplace_cfgs/vial_inserting/anyplace_diffusion_molmocrop.yaml  # config files for different tasks can be found under config/train_cfgs/anyplace_cfgs

# for multi-task training
cd training/
python train_full.py -c anyplace_cfgs/multitask/anyplace_diffusion_molmocrop_mt.yaml
```
### Evaluation
For evaluation, first obtain predicted placement poses by running the AnyPlace models, then execute the predicted placements with our IsaacLab Pick and Place pipeline.
#### Placement Pose Prediction
- Set up the meshcat visualizer to view the object pointclouds at each diffusion step:

  ```shell
  meshcat-server  # uses port 7000 by default
  ```
- Download the AnyPlace evaluation dataset (this dataset) from Hugging Face.
- Update the file paths in the config files, then run the AnyPlace model:

  ```shell
  cd eval/
  python evaluate_official.py -c anyplace_eval/vial_inserting/anyplace_diffusion_molmocrop_multitask.yaml  # config files for different tasks can be found under config/full_eval_cfgs/anyplace_eval
  ```
- To visualize pointclouds at their final predicted placement poses, first update the data folder path in `visualize_placement.py` and then run:

  ```shell
  cd eval/
  python visualize_placement.py
  ```
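`visualize_placement.py` handles the full visualization pipeline; for a quick standalone look at any `(N, 3)` pointcloud array you have exported (the array source and file format are up to you, this is not part of the repo), a minimal matplotlib sketch:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; render to a file instead of a window
import matplotlib.pyplot as plt

def plot_pointcloud(points: np.ndarray, out_path: str) -> None:
    """Scatter-plot an (N, 3) pointcloud and save the figure to out_path."""
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=1)
    fig.savefig(out_path)
    plt.close(fig)

# Example:
# plot_pointcloud(np.load("placed_object.npy"), "placement.png")  # hypothetical file
```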
#### IsaacLab Pick and Place Evaluation
Follow the instructions in the official AnyPlace GitHub repository to run the IsaacLab Pick and Place evaluation pipeline.
## License
This dataset is released under the MIT license.
## Citation
```bibtex
@inproceedings{zhao2025anyplace,
  title={AnyPlace: Learning Generalizable Object Placement for Robot Manipulation},
  author={Yuchi Zhao and Miroslav Bogdanovic and Chengyuan Luo and Steven Tohme and Kourosh Darvish and Alan Aspuru-Guzik and Florian Shkurti and Animesh Garg},
  booktitle={9th Annual Conference on Robot Learning},
  year={2025},
  url={https://openreview.net/forum?id=H0zFqW6QM0}
}
```