I used the 32-dim model, as it seemed to be the weakest with the flow-match Euler-discrete scheduler.
The output model is much larger than I wanted, which defeats the purpose of the overall structure; but it's paired directly at the knee with clip-vit-base-patch32, so I'll prepare a decoupled version here in a bit.
# Why clip-vit instead of just vit?
I believe the clip-vit variants have more utility overall, so I wanted to ensure a fair target was assessed.
# Notebook-6 · Crystal-CLIP CIFAR-100
One-vector image embeddings (HF CLIP) + pentachora vocabulary anchors → cosine-similarity classifier for CIFAR-100.
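The pipeline above can be sketched as follows. This is a minimal stand-in, not the notebook's code: `cosine_classify` is a hypothetical helper, and the random matrices stand in for the real inputs (one-vector image embeddings from HF CLIP's `get_image_features`, and the pentachora vocabulary anchors as a fixed per-class matrix).

```python
import numpy as np

def cosine_classify(image_embeds: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """Cosine-similarity classifier: L2-normalize both sides, then pick
    the anchor with the highest dot product.

    image_embeds: (N, D) one-vector image embeddings.
    anchors:      (C, D) one vocabulary anchor per class.
    Returns an (N,) array of predicted class indices.
    """
    img = image_embeds / np.linalg.norm(image_embeds, axis=-1, keepdims=True)
    anc = anchors / np.linalg.norm(anchors, axis=-1, keepdims=True)
    sims = img @ anc.T  # (N, C) cosine similarities
    return sims.argmax(axis=-1)

# Toy stand-in for CIFAR-100: 100 random anchors in a 512-dim space.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(100, 512))

# An embedding near anchor 42 should be assigned class 42.
probe = anchors[42] + 0.01 * rng.normal(size=512)
pred = cosine_classify(probe[None, :], anchors)
print(pred[0])  # → 42
```

In the real notebook the anchors replace a learned linear head: classification is just nearest-anchor lookup in the shared embedding space, so the class vocabulary can be swapped without retraining the image encoder.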