AbstractPhil committed · verified
Commit f363fc8 · 1 Parent(s): 110f276

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ Multi-modal Variational Autoencoder for text embedding transformation using geometric
 The current implementation is trained with only a handful of token sequences, so it's essentially front-loaded. Expect short sequences to work.
 Full-sequence pretraining will begin soon with a uniform vocabulary that takes both tokens in for a representative uniform token based on the position.
 
- Trained specifically to encode and decode a PAIR of encodings, each slightly twisted and warped into the direction of intention from the training. This is not your usual VAE.
+ Trained specifically to encode and decode a PAIR of encodings, each slightly twisted and warped into the direction of intention from the training. This is not your usual VAE, but she's most definitely trained like one.
 
 ![image](https://cdn-uploads.huggingface.co/production/uploads/630cf55b15433862cfc9556f/HRRHDhjuBAjTLLeRbyqur.png)
 
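As a rough illustration of what "encoding and decoding a PAIR of encodings" can look like, here is a minimal VAE sketch in PyTorch that takes two embeddings, maps them to a shared latent, and reconstructs both. All names and dimensions (`PairVAE`, `embed_dim=768`, `latent_dim=128`) are hypothetical and not taken from this repository; the actual architecture may differ substantially.

```python
# Hypothetical sketch of a paired-embedding VAE; names and sizes are
# illustrative assumptions, not this repo's API.
import torch
import torch.nn as nn

class PairVAE(nn.Module):
    def __init__(self, embed_dim: int = 768, latent_dim: int = 128):
        super().__init__()
        # Encoder sees both embeddings concatenated and produces one shared latent.
        self.encoder = nn.Sequential(nn.Linear(2 * embed_dim, 1024), nn.GELU())
        self.to_mu = nn.Linear(1024, latent_dim)
        self.to_logvar = nn.Linear(1024, latent_dim)
        # Decoder reconstructs both embeddings from that shared latent.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.GELU(),
            nn.Linear(1024, 2 * embed_dim),
        )

    def forward(self, emb_a: torch.Tensor, emb_b: torch.Tensor):
        h = self.encoder(torch.cat([emb_a, emb_b], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Standard reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon_a, recon_b = self.decoder(z).chunk(2, dim=-1)
        return recon_a, recon_b, mu, logvar

# Usage: a pair of text embeddings, encoded and decoded jointly.
vae = PairVAE()
a, b = torch.randn(4, 768), torch.randn(4, 768)
recon_a, recon_b, mu, logvar = vae(a, b)
```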