AbstractPhil committed on
Commit 110f276 · verified · 1 Parent(s): 3a15e36

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -15,6 +15,8 @@ Multi-modal Variational Autoencoder for text embedding transformation using geom
  The current implementation is trained with only a handful of token sequences, so it's essentially front-loaded. Expect short sequences to work.
  Full-sequence pretraining will begin soon with a uniform vocabulary that takes both tokens in for a representative uniform token based on the position.
 
+ Trained specifically to encode and decode a PAIR of encodings, each slightly twisted and warped into the direction of intention from the training. This is not your usual VAE.
+
  ![image](https://cdn-uploads.huggingface.co/production/uploads/630cf55b15433862cfc9556f/HRRHDhjuBAjTLLeRbyqur.png)
 
  ![image](https://cdn-uploads.huggingface.co/production/uploads/630cf55b15433862cfc9556f/Ew8Iu9kpCspxwanO6Gi2x.png)
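
The added README line describes a VAE that encodes and decodes a *pair* of embeddings jointly rather than a single input. As a rough sketch of that idea only (not the repository's actual architecture or API; the class name, dimensions, and layer sizes below are assumptions), a paired-embedding VAE forward pass could look like this:

```python
import torch
import torch.nn as nn

class PairedEmbeddingVAE(nn.Module):
    """Hypothetical sketch: encode a pair of text embeddings into one latent,
    then decode both embeddings back from that latent."""
    def __init__(self, embed_dim=768, latent_dim=256):
        super().__init__()
        # Both embeddings are concatenated before encoding.
        self.encoder = nn.Sequential(nn.Linear(2 * embed_dim, 1024), nn.GELU())
        self.to_mu = nn.Linear(1024, latent_dim)
        self.to_logvar = nn.Linear(1024, latent_dim)
        # The decoder reconstructs the concatenated pair from the latent.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.GELU(),
            nn.Linear(1024, 2 * embed_dim),
        )

    def forward(self, emb_a, emb_b):
        h = self.encoder(torch.cat([emb_a, emb_b], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample the latent from N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(z)
        recon_a, recon_b = recon.chunk(2, dim=-1)
        return recon_a, recon_b, mu, logvar

# Usage with random stand-in embeddings (batch of 4):
model = PairedEmbeddingVAE()
a, b = torch.randn(4, 768), torch.randn(4, 768)
recon_a, recon_b, mu, logvar = model(a, b)
```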