GGUF

These GGUF models are quantized from ibm-granite/granite-4.0-tiny-base-preview.

Granite-4.0-Tiny-Base-Preview is a 7B-parameter hybrid mixture-of-experts (MoE) language model featuring a 128k-token context window. The architecture leverages Mamba-2, superimposed with softmax attention for enhanced expressiveness, and uses no positional encoding for better length generalization.
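As a minimal sketch of fetching one of the quantized files, the snippet below uses huggingface_hub to download a single GGUF file from this repository. The filename is an assumption for illustration; check the repository's file list for the actual names of the published quantizations.

```python
# Minimal sketch: download one quantized GGUF file from this repository.
# The filename below is hypothetical; verify it against the repo's file list.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="skymizer/granite-4.0-tiny-base-preview-GGUF",
    filename="granite-4.0-tiny-base-preview-Q4_K_M.gguf",  # hypothetical filename
)
print(model_path)  # local path to the downloaded GGUF file
```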

Model size: 7B params
Architecture: granitehybrid

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit (see the loading sketch below).
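As a hedged sketch of running one of these files locally, the example below uses llama-cpp-python. The GGUF filename and the context size are assumptions, not parameters confirmed by this repository; adjust them to match the quantization you actually download.

```python
# Minimal sketch using llama-cpp-python; the GGUF filename and context size
# are assumptions, so adjust them to match the file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="granite-4.0-tiny-base-preview-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,  # the model supports up to 128k tokens; smaller values save memory
)

output = llm("Granite is a", max_tokens=64)
print(output["choices"][0]["text"])
```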

