Update README.md

README.md CHANGED

@@ -14,7 +14,7 @@ datasets:

## Model Description

-TRME (Text Residual Motion Encoder) significantly enhances the process of generating 3D human motion from textual descriptions. It extends the Vector Quantized Variational Autoencoder (VQ-VAE) architecture by integrating additional residual blocks. This innovation captures finer motion details, facilitating the synthesis of more diverse and realistic human motions. The model was developed to assist the animation and virtual reality industries, among others.
+TRME (Text Residual Motion Encoder) significantly enhances the process of generating 3D human motion from textual descriptions. It extends the Vector Quantized Variational Autoencoder (VQ-VAE) architecture used in T2M-GPT by integrating additional residual blocks. This innovation captures finer motion details, facilitating the synthesis of more diverse and realistic human motions. The model was developed to assist the animation and virtual reality industries, among others.

**Key Features**:
- Utilizes enhanced VQ-VAE architecture.

@@ -59,6 +59,15 @@ This model has been trained on the following datasets:

## Acknowledgements

-
+@misc{vumichien_2023,
+  author    = {{vumichien}},
+  title     = {T2M-GPT (Revision e311a99)},
+  year      = 2023,
+  url       = {https://huggingface.co/vumichien/T2M-GPT},
+  doi       = {10.57967/hf/0341},
+  publisher = {Hugging Face}
+}
+
+We would like to thank the contributors and researchers in the 3D Human Motion Generation domain, whose insights and datasets were invaluable in developing the TRME model. Special thanks to Dr. Youshan Zhang (Assistant Professor at Yeshiva University, NYC) for his guidance and expertise throughout the project.

For more information, contributions, or questions, please visit our [project repository](https://github.com/YoushanZhang/AiAI/tree/main/3D%20Human%20Motion).
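
The description above says TRME extends the T2M-GPT VQ-VAE by adding residual blocks to the motion encoder. As a rough illustration of that idea only (the layer widths, block count, class names, and the 263-dimensional motion features are assumptions for this sketch, not the released TRME code), an encoder with extra 1-D convolutional residual blocks might look like this:

```python
# Illustrative sketch only: names, layer sizes, and the 263-dim motion
# features are assumptions, not the released TRME architecture.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """1-D convolutional residual block applied along the frame axis."""

    def __init__(self, channels: int, dilation: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection: extra blocks refine details without
        # overwriting what shallower layers already capture.
        return x + self.net(x)


class MotionEncoder(nn.Module):
    """VQ-VAE-style motion encoder with additional residual blocks."""

    def __init__(self, motion_dim: int = 263, channels: int = 512,
                 num_res_blocks: int = 3):
        super().__init__()
        layers = [
            nn.Conv1d(motion_dim, channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        ]
        # The "additional residual blocks" the description refers to.
        layers += [ResidualBlock(channels) for _ in range(num_res_blocks)]
        self.encoder = nn.Sequential(*layers)

    def forward(self, motion: torch.Tensor) -> torch.Tensor:
        # motion: (batch, frames, motion_dim) -> (batch, frames // 2, channels)
        z = self.encoder(motion.transpose(1, 2))
        return z.transpose(1, 2)


if __name__ == "__main__":
    dummy = torch.randn(2, 64, 263)       # 2 clips, 64 frames each
    print(MotionEncoder()(dummy).shape)   # torch.Size([2, 32, 512])
```

The skip connection in each block lets the added depth refine fine-grained motion detail without disturbing what the shallower layers already encode, which matches the motivation given in the model description.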