|
|
--- |
|
|
license: apache-2.0 |
|
|
task_categories: |
|
|
- robotics |
|
|
--- |
|
|
|
|
|
<div align="center"> |
|
|
<h1> ALMI-X </h1> |
|
|
</div> |
|
|
<h5 align="center"> |
|
|
<a href="https://almi-humanoid.github.io/">🌍website</a>  <a href="https://github.com/TeleHuman/ALMI-Open/">📊code</a>   <a href="https://arxiv.org/abs/2504.14305">📖paper</a> |
|
|
</h5> |
|
|
|
|
|
 |
|
|
|
|
|
|
|
|
# Overview |
|
|
We release ALMI-X, a large-scale whole-body motion control dataset featuring high-quality episodic trajectories collected in MuJoCo simulation and deployable on real robots, built on our humanoid control policy, ALMI.
|
|
|
|
|
# Dataset Construction
|
|
We collect the ALMI-X dataset in MuJoCo simulation by running the trained ALMI policy. In simulation, we combine a diverse range of upper-body motions with omnidirectional lower-body commands and use a pre-defined template to generate a linguistic description for each combination. (i) For the upper body, we collect data by using our upper-body policy to track motions from a subset of the AMASS dataset, removing entries with indistinct movements or entries that cannot be matched with lower-body commands, such as `push from behind`.
|
|
(ii) For the lower body, we first categorize command directions into several types according to different combinations of linear and angular velocity commands, and we define three difficulty levels for command magnitudes; a lower-body command is then set by combining a direction type with a difficulty level.
|
|
Overall, each upper-body motion from the AMASS subset is paired with a direction type and a difficulty level, which serve as the policy inputs for controlling the robot. Trajectories in which the lower body stands still while the upper body tracks a motion are also included in the dataset. Each language description in ALMI-X follows the template `"[movement mode] [direction] [velocity level] and [motion]"`, and each description corresponds to a trajectory lasting about 4 seconds (200 steps). For each trajectory, we run the two policies (i.e., the lower-body and upper-body policies) on the commands obtained from the aforementioned combinations to achieve humanoid whole-body control.
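As an illustration, the following minimal sketch composes a lower-body command and the templated description. The direction types, level names, and velocity magnitudes below are hypothetical placeholders, not the exact values used in ALMI-X:

```python
# Hypothetical direction types: sign patterns of (vx, vy, yaw_rate).
DIRECTION_TYPES = {
    "forward":     (1.0, 0.0, 0.0),
    "backward":    (-1.0, 0.0, 0.0),
    "left":        (0.0, 1.0, 0.0),
    "turn left":   (0.0, 0.0, 1.0),
    "stand still": (0.0, 0.0, 0.0),
}

# Hypothetical magnitude scales for the three difficulty levels.
LEVEL_SCALES = {"slowly": 0.3, "moderately": 0.6, "fast": 1.0}

def make_command(direction: str, level: str):
    """Scale a direction type by a difficulty level to get a velocity command."""
    sx, sy, syaw = DIRECTION_TYPES[direction]
    scale = LEVEL_SCALES[level]
    return (sx * scale, sy * scale, syaw * scale)

def make_description(mode: str, direction: str, level: str, motion: str) -> str:
    """Instantiate the "[movement mode] [direction] [velocity level] and [motion]" template."""
    return f"{mode} {direction} {level} and {motion}"

print(make_command("forward", "fast"))        # (1.0, 0.0, 0.0)
print(make_description("walk", "forward", "fast", "wave the right hand"))
```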
|
|
|
|
|
# How to Use Dataset |
|
|
|
|
|
- We release all of the text descriptions in `text.tar.gz`, and the trajectory data in `data.tar.gz`, which contains robot states, actions, DoF positions, global positions, and global orientations.
|
|
- We release the training split in `train.txt`.
|
|
|
|
|
Here is a simple demo introducing the data format of a trajectory file:
|
|
|
|
|
```python
import numpy as np

# Load one trajectory file (replace "data_path" and "xxx.npy" with a real path and file).
data = np.load("data_path" + "/xxx.npy", allow_pickle=True).item()

data['obs']         # robot observations,                    shape [frame_nums, 71]
data['actions']     # policy actions,                        shape [frame_nums, 21]
data['dof_pos']     # DoF positions,                         shape [frame_nums, 21]
data['root_trans']  # global root position,                  shape [frame_nums, 3]
data['root_rot']    # global root orientation (quaternion),  shape [frame_nums, 4]
```
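
To pair trajectories with their text descriptions via the training split, something like the sketch below can be used. The directory layout it assumes (IDs listed line by line in `train.txt`, one `<id>.npy` per trajectory and one `<id>.txt` per description) is an illustrative assumption; adapt the paths to however the archives extract:

```python
from pathlib import Path
import numpy as np

data_dir = Path("data")   # extracted from data.tar.gz
text_dir = Path("texts")  # extracted from text.tar.gz

# Read the training split: one trajectory ID per line (assumed layout).
with open("train.txt") as f:
    train_ids = [line.strip() for line in f if line.strip()]

for traj_id in train_ids[:3]:  # peek at a few samples
    traj = np.load(data_dir / f"{traj_id}.npy", allow_pickle=True).item()
    text = (text_dir / f"{traj_id}.txt").read_text().strip()
    print(traj_id, traj["obs"].shape, "->", text)
```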
|
|
|
|
|
# Dataset Statistics |
|
|
Percentage of steps for different categories of motions before and after data augmentation. |
|
|
<br> |
|
|
 |
|
|
Visualization of the $x$-$y$ coordinates of the robot at each step in the dataset (down-sampled for visualization).
|
|
<br> |
|
|
 |
|
|
|
|
|
# Dataset Collection Pipeline |
|
|
|
|
|
We release our dataset collection code in our GitHub repository: <a href="https://github.com/TeleHuman/ALMI-Open/tree/master/Data_Collection">Data_Collection</a>.
|
|
|
|
|
# Citation |
|
|
|
|
|
If you find our work helpful, please cite us: |
|
|
|
|
|
```bibtex |
|
|
@misc{shi2025almi, |
|
|
title={Adversarial Locomotion and Motion Imitation for Humanoid Policy Learning}, |
|
|
author={Jiyuan Shi and Xinzhe Liu and Dewei Wang and Ouyang Lu and Sören Schwertfeger and Fuchun Sun and Chenjia Bai and Xuelong Li}, |
|
|
year={2025}, |
|
|
eprint={2504.14305}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.RO}, |
|
|
url={https://arxiv.org/abs/2504.14305}, |
|
|
} |
|
|
|
|
``` |
|
|
|