Update README.md
README.md CHANGED
@@ -24,7 +24,7 @@ TRME (Text Residual Motion Encoder) significantly enhances the process of genera
 ![Model Visualization]
 
 <figure>
-<img src="https://huggingface.co/rsax/trme/
+<img src="https://huggingface.co/rsax/trme/resolve/main/motions.png" alt="TRME">
 <figcaption>Generated Sequences for MOYO and HumanML3D from T2M-GPT and TRME. The figure provides detailed visualizations of motion sequences generated by our models. The generated motions correspond to two different captions extracted from the HumanML3D and MOYO datasets. Notably, TRME demonstrates superior performance in capturing dependencies across diverse motion classes compared to state-of-the-art models.
 </figcaption>
 </figure>

@@ -34,7 +34,7 @@ The model has been trained on a novel dataset, CHAD, which includes a comprehens
 Learn more about the training process and model architecture below:
 
 <figure>
-<img src="https://huggingface.co/rsax/trme/
+<img src="https://huggingface.co/rsax/trme/resolve/main/overview.png" alt="overview" width="425" height="480"/>
 <figcaption>Data flow diagram for the TRME model, highlighting the progression from the AMASS database to the creation of the CHAD dataset and subsequent motion generation.
 </figcaption>
 </figure>

@@ -44,7 +44,7 @@ Learn more about the training process and model architecture below:
 Here is a demonstration of the model generating a complex human motion from a simple textual description:
 
 <figure>
-<img src="https://huggingface.co/rsax/trme/
+<img src="https://huggingface.co/rsax/trme/resolve/main/illustration_10.png" alt="illustration" width="425" height="480"/>
 <figcaption>A person is doing a tree pose.
 </figcaption>
 </figure>
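The images added above are served from the model repo via its resolve/main/ paths. As a side note, here is a minimal sketch of fetching the same figure assets locally, assuming the huggingface_hub client library; the repo id and filenames are taken from the URLs added in this commit.

```python
# Minimal sketch: download the figure assets referenced in the updated README
# from the rsax/trme model repo (assumes `pip install huggingface_hub`).
from huggingface_hub import hf_hub_download

for filename in ("motions.png", "overview.png", "illustration_10.png"):
    # Returns the path of the file in the local Hugging Face cache.
    local_path = hf_hub_download(repo_id="rsax/trme", filename=filename)
    print(filename, "->", local_path)
```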