This repository contains the model from the paper IRG-MotionLLM: Interleaving Motion Generation, Assessment and Refinement for Text-to-Motion Generation.

Code: https://github.com/HumanMLLM/IRG-MotionLLM

Our models are released under the CC-BY-NC-SA 4.0 license and may be used for research purposes only. They are built on [Gemma2-2B-IT](https://huggingface.co/google/gemma-2-2b-it) under its [LICENSE](https://ai.google.dev/gemma/terms).

If you find our work helpful for your research, please consider citing our paper:

```bibtex
@article{li2025irg,
  title={IRG-MotionLLM: Interleaving Motion Generation, Assessment and Refinement for Text-to-Motion Generation},
  author={Li, Yuan-Ming and Yang, Qize and Lei, Nan and Fu, Shenghao and Zeng, Ling-An and Hu, Jian-Fang and Wei, Xihan and Zheng, Wei-Shi},
  journal={arXiv preprint arXiv:2512.10730},
  year={2025}
}
```