Update README.md
README.md
CHANGED
@@ -15,7 +15,7 @@ Multi-modal Variational Autoencoder for text embedding transformation using geom
The current implementation is trained with only a handful of token sequences, so it's essentially front-loaded. Expect only short sequences to work.

Full-sequence pretraining will begin soon with a uniform vocabulary that takes both tokens in and maps them to a single representative token based on position.

- Trained specifically to encode and decode a PAIR of encodings, each slightly twisted and warped in the direction of intention from the training. This is not your usual VAE.
+ Trained specifically to encode and decode a PAIR of encodings, each slightly twisted and warped in the direction of intention from the training. This is not your usual VAE, but she's most definitely trained like one.
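
For readers skimming the diff, here is a minimal sketch of what a VAE that encodes and decodes a pair of embeddings jointly might look like. This is an assumption based only on the README's description, not the repository's actual code; all names (`PairVAE`, `embed_dim`, `latent_dim`) are illustrative.

```python
# Hypothetical sketch (NOT this repo's actual architecture) of a VAE that
# encodes and decodes a PAIR of embeddings jointly, as the README describes.
import torch
import torch.nn as nn

class PairVAE(nn.Module):
    def __init__(self, embed_dim: int = 256, latent_dim: int = 64):
        super().__init__()
        # Encoder sees both embeddings at once (concatenated), so the latent
        # code captures the pair jointly rather than each embedding alone.
        self.encoder = nn.Sequential(nn.Linear(2 * embed_dim, 512), nn.GELU())
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)
        # Decoder reconstructs both embeddings from the shared latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.GELU(),
            nn.Linear(512, 2 * embed_dim),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor):
        h = self.encoder(torch.cat([a, b], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Standard VAE reparameterization trick ("trained like one").
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        a_hat, b_hat = self.decoder(z).chunk(2, dim=-1)
        return a_hat, b_hat, mu, logvar

def vae_loss(a, b, a_hat, b_hat, mu, logvar, beta: float = 1.0):
    # Reconstruction error for both halves of the pair, plus the usual KL term.
    recon = nn.functional.mse_loss(a_hat, a) + nn.functional.mse_loss(b_hat, b)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```

The key design point the README hints at is that the pair shares one latent code, so training warps each reconstruction toward the jointly learned representation rather than reconstructing each embedding independently.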