---
license: apache-2.0
task_categories:
- robotics
---

<div align="center">
<h1> ALMI-X </h1>
</div>
<h5 align="center">
    <a href="https://almi-humanoid.github.io/">🌍 website</a>&nbsp; <a href="https://github.com/TeleHuman/ALMI-Open/">📊 code</a>&nbsp; <a href="https://arxiv.org/abs/2504.14305">📖 paper</a>
</h5>

![ALMI-X overview](asset/almi-x.png)


# Overview
We release ALMI-X, a large-scale whole-body motion control dataset of high-quality episodic trajectories collected in MuJoCo simulation with our humanoid control policy, ALMI. The trajectories are deployable on real robots.

# Dataset Instruction
We collect the ALMI-X dataset in MuJoCo simulation by running the trained ALMI policy. In simulation, we combine a diverse range of upper-body motions with omnidirectional lower-body commands, and use a pre-defined paradigm to generate a linguistic description for each combination. (i) For the upper body, we collect data by using our upper-body policy to track motions from a subset of the AMASS dataset, removing entries with indistinct movements or ones that cannot be matched with lower-body commands, such as `push from behind`.
(ii) For the lower body, we first categorize command directions into several types according to different combinations of linear and angular velocity commands, and define 3 difficulty levels for command magnitudes; a lower-body command is then set by combining a direction type with a difficulty level.
Overall, each upper-body motion from the AMASS subset is paired with a specific direction type and difficulty level, which serve as the policy inputs that control the robot. In addition, trajectories in which the lower body `stands still` while the upper body tracks motions are also incorporated into the dataset. Each language description in ALMI-X is organized as `"[movement mode] [direction] [velocity level] and [motion]"`, and corresponds to the data collected from one trajectory lasting about 4 seconds (200 steps). For each trajectory, we run the two policies (i.e., the lower policy and the upper policy) on the commands obtained from the aforementioned combinations to achieve humanoid whole-body control.
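The description template above can be sketched in code; the vocabulary below is illustrative only, not the exact word lists used to build ALMI-X:

```python
# Hypothetical vocabulary; the actual ALMI-X word lists may differ.
movement_modes = ["walk", "turn"]
directions = ["forward", "backward", "left", "right"]
velocity_levels = ["slowly", "at medium speed", "fast"]

def describe(mode: str, direction: str, level: str, motion: str) -> str:
    """Compose a '[movement mode] [direction] [velocity level] and [motion]' description."""
    return f"{mode} {direction} {level} and {motion}"

print(describe("walk", "forward", "slowly", "wave both hands"))
# -> walk forward slowly and wave both hands
```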
 
# How to Use Dataset

 - We release all of the text description data in `text.tar.gz`, and the trajectory data in `data.tar.gz`, which contains robot states, actions, DoF positions, global positions, and global orientation information.
 - We release the training split in `train.txt`.

Here we offer a simple demo code to introduce the data formats in the dataset:

```python
import numpy as np

# Each trajectory is stored as a pickled dict inside a .npy file.
data = np.load("data_path" + "/xxx.npy", allow_pickle=True)
data.item()['obs']         # [frame_nums, 71]  robot states (observations)
data.item()['actions']     # [frame_nums, 21]  policy actions
data.item()['dof_pos']     # [frame_nums, 21]  DoF positions
data.item()['root_trans']  # [frame_nums, 3]   global root position
data.item()['root_rot']    # [frame_nums, 4]   global root orientation
```
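To check your loading code against this format without downloading the archives, you can round-trip a dummy trajectory with the shapes listed above (the `float32` dtype here is an assumption; the released files may use a different precision):

```python
import numpy as np

# Build a dummy trajectory matching the documented shapes
# (one trajectory is about 4 seconds at 200 steps).
frame_nums = 200
traj = {
    "obs": np.zeros((frame_nums, 71), dtype=np.float32),
    "actions": np.zeros((frame_nums, 21), dtype=np.float32),
    "dof_pos": np.zeros((frame_nums, 21), dtype=np.float32),
    "root_trans": np.zeros((frame_nums, 3), dtype=np.float32),
    "root_rot": np.zeros((frame_nums, 4), dtype=np.float32),
}

# np.save stores the dict as a 0-d object array; .item() recovers it,
# which is why the demo above calls data.item() after loading.
np.save("demo_traj.npy", traj, allow_pickle=True)
loaded = np.load("demo_traj.npy", allow_pickle=True).item()
print(loaded["root_rot"].shape)
# -> (200, 4)
```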

# Dataset Statistics
Percentage of steps for different categories of motions before and after data augmentation.
<br>
![Percentage of steps per motion category](asset/text_expand.jpg)
Visualization of the robot's $x$-$y$ coordinates for each step in the dataset (down-sampled for plotting).
<br>
![Scatter plot of robot trajectories](asset/traj_scatter.jpg)

# Dataset Collection Pipeline

We release our dataset collection code in our GitHub repository: <a href="https://github.com/TeleHuman/ALMI-Open/tree/master/Data_Collection">Data_Collection</a>.

# Citation

If you find our work helpful, please cite us:

```bibtex
@misc{shi2025almi,
  title={Adversarial Locomotion and Motion Imitation for Humanoid Policy Learning},
  author={Jiyuan Shi and Xinzhe Liu and Dewei Wang and Ouyang Lu and Sören Schwertfeger and Fuchun Sun and Chenjia Bai and Xuelong Li},
  year={2025},
  eprint={2504.14305},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2504.14305}
}
```