Contents of the dataset:
- 'param': all parameters of the policy network, flattened into a single vector.
- 'traj': a prior trajectory from the first 60 steps, stored as 's_0, a_0, a_1, a_2, s_3, a_3, a_4, a_5'.
- 'task': the three success states 's_m, s_{m+1}, s_{m+2}'.
To train on your own dataset or task, you can design the trajectory dimensions yourself and encode them to a shared embedding dimension (we used 128). You can also fine-tune our pretrained model on your dataset, provided the behavior dimensions match.
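As a minimal sketch of the data layout described above, the snippet below builds the three fields for one sample. The layer shapes, state/action dimensions, and the random linear projection standing in for the learned trajectory encoder are all illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical policy network: two dense layers (shapes are illustrative only).
layers = {
    "fc1.weight": rng.normal(size=(64, 39)),
    "fc1.bias": rng.normal(size=(64,)),
    "fc2.weight": rng.normal(size=(4, 64)),
    "fc2.bias": rng.normal(size=(4,)),
}

# 'param': every parameter tensor flattened and concatenated into one vector.
param = np.concatenate([v.ravel() for v in layers.values()])

# 'traj': one behavior sample ordered as 's_0, a_0, a_1, a_2, s_3, a_3, a_4, a_5'
# (state dim 39 and action dim 4 here, purely as an example).
s0, s3 = rng.normal(size=39), rng.normal(size=39)
actions = [rng.normal(size=4) for _ in range(6)]
traj = np.concatenate([s0, *actions[:3], s3, *actions[3:]])

# Encode the raw trajectory to a fixed embedding dimension (e.g. 128);
# a random linear projection stands in for the learned encoder.
embed_dim = 128
W = rng.normal(size=(embed_dim, traj.shape[0]))
traj_embedding = W @ traj
```

With these example dimensions, `param` has shape `(2820,)`, the raw `traj` has shape `(102,)`, and `traj_embedding` has the fixed shape `(128,)` regardless of the raw trajectory length.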
## 📝 Citation
If you find our model or dataset useful, please consider citing as follows:
```
@article{liang2024make,
  title={Make-An-Agent: A Generalizable Policy Network Generator with Behavior-Prompted Diffusion},
  author={Liang, Yongyuan and Xu, Tingqiang and Hu, Kaizhe and Jiang, Guangqi and Huang, Furong and Xu, Huazhe},
  journal={arXiv preprint arXiv:2407.10973},
  year={2024}
}
```