Update README.md

README.md CHANGED

@@ -11,7 +11,7 @@ Architecture details:
 2. MoE FFNs have GeGLU architecture, with inner/gate dim of 1024. The model's hidden dim is 2048.
 3. Usable context length increased to 32K, with [a custom efficient SuperBPE tokenizer](https://huggingface.co/moondream/starmie-v1)
 4. Multi-headed attention with learned position- and data-dependent temperature scaling
-5.
+5. SigLIP-based vision encoder, with multi-crop channel concatenation for token-efficient high resolution image processing

 For more details, please refer to our ||coming soon release blog post||. Or try the model out in our [playground demo](https://moondream.ai/c/playground).
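The per-expert FFN described in item 2 can be sketched in NumPy. This is an illustrative sketch only: the class name, initialization scale, and GELU approximation are assumptions, not from the release; only the shapes (hidden dim 2048, inner/gate dim 1024) come from the README.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

class GeGLUFFN:
    """One expert FFN with a GeGLU activation:
    out = W_out @ (gelu(x @ W_gate) * (x @ W_in)).
    Dims follow the README: hidden dim 2048, inner/gate dim 1024."""
    def __init__(self, hidden_dim=2048, inner_dim=1024, seed=0):
        rng = np.random.default_rng(seed)
        scale = 0.02  # assumed init scale, purely illustrative
        self.w_in = rng.normal(0.0, scale, (hidden_dim, inner_dim))    # value projection
        self.w_gate = rng.normal(0.0, scale, (hidden_dim, inner_dim))  # gate projection
        self.w_out = rng.normal(0.0, scale, (inner_dim, hidden_dim))   # output projection

    def __call__(self, x):
        return (gelu(x @ self.w_gate) * (x @ self.w_in)) @ self.w_out

ffn = GeGLUFFN()
x = np.zeros((4, 2048))  # a batch of 4 token vectors
y = ffn(x)
print(y.shape)  # (4, 2048)
```

In a full MoE layer, a router would dispatch each token to a subset of such experts; the routing scheme is not specified in this diff.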
|
| 17 |
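Item 4's "learned position- and data-dependent temperature scaling" is not specified further in this diff. One common way such a mechanism is built is to predict a per-query softmax temperature from the query vector plus a position feature; the sketch below shows that pattern with entirely hypothetical parameters (`w_temp`, `b_temp`, the log-position feature), not the model's actual formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_temperature(q, k, v, w_temp, b_temp):
    """Single-head attention where each query's softmax temperature is
    predicted from its content (q @ w_temp) and its position (log(1+i)),
    passed through a softplus to keep the temperature positive."""
    d = q.shape[-1]
    logits = (q @ k.T) / np.sqrt(d)          # (T, T) raw attention scores
    pos = np.log1p(np.arange(len(q)))        # position feature per query
    tau = np.log1p(np.exp(q @ w_temp + pos + b_temp))  # softplus: (T,) temperatures
    return softmax(logits / tau[:, None]) @ v

rng = np.random.default_rng(0)
T, d = 6, 8
q, k, v = (rng.normal(size=(T, d)) for _ in range(3))
out = attention_with_temperature(q, k, v, w_temp=rng.normal(size=d), b_temp=1.0)
print(out.shape)  # (6, 8)
```

A higher temperature flattens a query's attention distribution and a lower one sharpens it, which lets the model modulate focus per token rather than using a single global 1/sqrt(d) scale.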
|
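The "multi-crop channel concatenation" in item 5 can be illustrated as tiling a high-resolution image into crops and stacking them along the channel axis, so the vision encoder processes one tensor (and emits one set of tokens) rather than one per crop. The crop size and grid below are placeholders; the model's actual tiling scheme is not described in this diff.

```python
import numpy as np

def multi_crop_channel_concat(image, crop=378, grid=2):
    """Split a (crop*grid, crop*grid, 3) image into grid*grid crops and
    concatenate them on the channel axis, yielding a single
    (crop, crop, 3*grid*grid) input instead of grid*grid separate images."""
    assert image.shape[:2] == (crop * grid, crop * grid), "resize before tiling in this sketch"
    crops = [image[i * crop:(i + 1) * crop, j * crop:(j + 1) * crop, :]
             for i in range(grid) for j in range(grid)]
    return np.concatenate(crops, axis=-1)

img = np.zeros((756, 756, 3))
print(multi_crop_channel_concat(img).shape)  # (378, 378, 12)
```

The token saving comes from the spatial footprint: the encoder sees a crop-sized spatial grid regardless of how many crops were folded into the channels.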