---
license: apache-2.0
---
These GGUF models are quantized from [ibm-granite/granite-4.0-tiny-base-preview](https://huggingface.co/ibm-granite/granite-4.0-tiny-base-preview).
Granite-4.0-Tiny-Base-Preview is a 7B-parameter **hybrid mixture-of-experts (MoE)** language model with a 128k-token context window. The architecture combines **Mamba-2** layers with softmax attention for enhanced expressiveness, and uses **no positional encoding** for better length generalization.
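
A minimal sketch of running one of these GGUF files locally with [llama.cpp](https://github.com/ggml-org/llama.cpp). The repository id and the quantization filename below are placeholders, not names confirmed by this card; check this repo's file list for the actual GGUF filenames available.

```shell
# Download one GGUF file from this repository.
# Both <repo-id> and the filename are illustrative placeholders.
huggingface-cli download <repo-id> \
  granite-4.0-tiny-base-preview-Q4_K_M.gguf --local-dir .

# Run a short completion with llama.cpp's CLI.
# This is a base (non-instruct) model, so plain text continuation
# prompts are more appropriate than chat-style prompts.
llama-cli -m granite-4.0-tiny-base-preview-Q4_K_M.gguf \
  -p "Mixture-of-experts models route each token to" -n 64
```

Lower-bit quantizations (e.g. Q4) trade some output quality for a smaller memory footprint; pick the largest quantization that fits your hardware.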
|