---
license: gpl-3.0
language:
- en
tags:
- text-to-motion
- human-motion-generation
- 3d-motion
datasets:
- CHAD
- HumanML3D
- AMASS
---

## Model Description

TRME (Text Residual Motion Encoder) generates 3D human motion from textual descriptions. It extends the Vector Quantized Variational Autoencoder (VQ-VAE) architecture with additional residual blocks, which capture finer motion details and enable the synthesis of more diverse and realistic human motions. The model is intended for applications such as animation and virtual reality.

**Key Features**:
- Utilizes an enhanced VQ-VAE architecture with additional residual blocks.
- Capable of detailed and complex motion synthesis.
- Optimized for high diversity and realism in generated motions.

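The residual quantization idea above can be sketched in a few lines. The following is a minimal NumPy illustration of a residual block feeding a VQ-VAE-style codebook lookup, not the TRME implementation: the function names (`residual_block`, `quantize`), dimensions, and random codebook are assumptions made purely for demonstration.

```python
import numpy as np

def residual_block(x, w1, w2):
    """Minimal residual block: two linear maps with a ReLU, plus a skip
    connection that lets fine-grained detail pass through unchanged."""
    h = np.maximum(x @ w1, 0.0)        # linear + ReLU
    return x + h @ w2                  # skip connection

def quantize(z, codebook):
    """Nearest-neighbour lookup into a VQ-VAE codebook.
    Returns the quantized vectors and their discrete code indices."""
    # squared distance between each latent and each codebook entry
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    return codebook[idx], idx

rng = np.random.default_rng(0)
dim, hidden, n_codes, T = 8, 16, 32, 5  # latent dim, hidden width, codebook size, frames

z = rng.normal(size=(T, dim))           # per-frame motion latents
w1 = rng.normal(size=(dim, hidden)) * 0.1
w2 = rng.normal(size=(hidden, dim)) * 0.1
codebook = rng.normal(size=(n_codes, dim))

z_res = residual_block(z, w1, w2)       # residual refinement of the latents
z_q, codes = quantize(z_res, codebook)  # discrete bottleneck
print(z_q.shape, codes.shape)           # (5, 8) (5,)
```

In a full model the residual path lets the encoder recover detail lost by the discrete bottleneck, which is the motivation the description above gives for the added residual blocks.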
![Model Visualization](https://github.com/YoushanZhang/AiAI/blob/main/3D%20Human%20Motion/motions.png)

The model has been trained on a novel dataset, CHAD, which includes a comprehensive set of human motion data, enabling it to handle a wide variety of motion generation tasks. Learn more about the training process and model architecture [here](https://github.com/YoushanZhang/TRME).

## Example Usage

Here is a demonstration of the model generating a complex human motion from a simple textual description:

"**A person performs a sequence of yoga poses, transitioning smoothly from a standing position into a downward dog, followed by a warrior pose.**"

![Generated Motion from Text](https://github.com/YoushanZhang/AiAI/blob/main/3D%20Human%20Motion/TRME_demo.gif)

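To make the text-to-motion flow concrete, here is a toy, self-contained sketch of the pipeline shape (prompt embedding, discrete code selection, decoding to joint positions). Everything in it is illustrative: `embed_text` is a stand-in for a real language encoder, `generate_motion` is not the TRME inference API, and the codebook, decoder weights, and joint count are invented for the example.

```python
import zlib
import numpy as np

def embed_text(prompt, dim=8):
    """Toy text encoder: average of per-word pseudo-embeddings seeded by a
    stable hash. Stand-in for a real language encoder."""
    vecs = [np.random.default_rng(zlib.crc32(w.encode())).normal(size=dim)
            for w in prompt.lower().split()]
    return np.stack(vecs).mean(axis=0)

def generate_motion(prompt, codebook, decoder_w, frames=4):
    """Illustrative text-to-motion loop: embed the prompt, pick the nearest
    codebook entry per frame, decode each discrete token to joint positions."""
    state = embed_text(prompt, dim=codebook.shape[1])
    motion = []
    for _ in range(frames):
        d = ((codebook - state) ** 2).sum(axis=1)
        code = codebook[d.argmin()]       # discrete motion token
        state = 0.5 * state + 0.5 * code  # crude autoregressive update
        motion.append(code @ decoder_w)   # decode token to joint coordinates
    return np.stack(motion)               # (frames, joints * 3)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(32, 8))
decoder_w = rng.normal(size=(8, 22 * 3))  # 22 joints, xyz each (assumed)
seq = generate_motion("a person walks forward", codebook, decoder_w)
print(seq.shape)                          # (4, 66)
```

A real deployment would replace the toy encoder with a trained text model and the nearest-neighbour loop with the learned prior over motion tokens; the sketch only shows how text conditioning, a discrete codebook, and a decoder fit together.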
## Datasets

This model has been trained on the following datasets:
- **CHAD (Comprehensive Human Activity Dataset)**: An aggregation of motion capture data tailored to diverse activities, enhancing training and model robustness.
- **HumanML3D**: Provides diverse scenarios ranging from daily activities to complex sports movements.
- **AMASS**: A large-scale motion capture dataset that provides an extensive set of human movements and poses.

## Acknowledgements

We thank the contributors and researchers whose insights and datasets were invaluable in developing the TRME model. Special thanks to Dr. Youshan Zhang (Assistant Professor at Yeshiva University, NYC) for his guidance and expertise throughout the project.

For more information, contributions, or questions, please visit our [project repository](https://github.com/YoushanZhang/AiAI/tree/main/3D%20Human%20Motion).