update readme links
README.md
# TempVerseFormer - Pre-trained Models
[](https://huggingface.co/LKyluk/TempVerseFormer)
[](https://github.com/leo27heady/TempVerseFormer)
[](https://github.com/leo27heady/simple-shape-dataset-toolbox)
[](https://wandb.ai/leo27heady/pipe-transformer/reports/TempVerseFormer-Training-Logs--VmlldzoxMTg3OTQ3NQ)
This repository hosts pre-trained models for **TempVerseFormer: Temporal Modeling with Reversible Transformers**, a novel architecture introduced in the research article **"Temporal Modeling with Reversible Transformers"**.
This repository contains pre-trained weights for the following models, as described in the research article:
* **TempFormer (Vanilla-Transformer):** A standard Vanilla Transformer architecture with temporal chaining, serving as a baseline to compare against TempVerseFormer.
* **TempVerseFormer (Rev-Transformer):** The core Reversible Temporal Transformer architecture, leveraging reversible blocks and time-agnostic backpropagation for memory efficiency.
* **Standard Transformer (Pipe-Transformer):** A standard Transformer model that predicts only a single next element at each step.
* **LSTM:** A Long Short-Term Memory network, representing a traditional recurrent sequence modeling approach.
* **VAE Models:** Variational Autoencoder (VAE) models used for encoding and decoding images to and from a latent space: