# Gemma 4 E4B Instruct Parity MLX (8bit)
This repository contains the canonical MLX side of the meshllm Gemma 4 parity pair.
- Source checkpoint: google/gemma-4-E4B-it
- Conversion path: original checkpoint -> MLX -> 8bit
- Intended use: backend parity testing against the matching GGUF artifact
This artifact is not meant to be a generic Gemma 4 community release. It exists so that MLX and GGUF can be compared from the same original model lineage with minimal converter mismatch.
Canonical pair:
- GGUF: meshllm/gemma-4-e4b-it-parity-q8_0-gguf
- MLX: meshllm/gemma-4-e4b-it-parity-8bit-mlx
Latest trusted exact result:
| Backend | Model | Exact |
|---|---|---|
| GGUF | meshllm/gemma-4-e4b-it-parity-q8_0-gguf/gemma-4-e4b-it-q8_0.gguf | PASS |
| MLX | this repo | PASS |
Prompt comparison from local same-origin validation:
| Prompt | GGUF Q8_0 | MLX 8bit |
|---|---|---|
| primary | blue | blue |
| alt-green | green | green |
| alt-red | red | red |
| capital-france | Paris | Paris |
| primary-colors | red, green, blue | red, green, blue |
| two-plus-two | 4 | 4 |
| largest-planet | Jupiter | Jupiter |
| breathing-gas | Oxygen | Oxygen |
| opposite-hot | Cold | Cold |
| banana-color | Yellow | Yellow |
| after-monday | Tuesday | Tuesday |
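The exact-match comparison behind the rows above can be sketched as follows. This is a minimal illustration, not the repository's actual validation harness; the normalization rules (strip whitespace, drop a trailing period, lowercase) are assumptions about what "exact" tolerates.

```python
# Sketch of an exact-match parity check between two backends' answers.
# Normalization rules here are assumptions, not the repo's real harness.

def normalize(answer: str) -> str:
    """Collapse cosmetic differences before comparing backend outputs."""
    return answer.strip().rstrip(".").lower()

def parity_check(results: dict[str, tuple[str, str]]) -> dict[str, bool]:
    """Map each prompt id to True when the GGUF and MLX answers match."""
    return {
        prompt: normalize(gguf) == normalize(mlx)
        for prompt, (gguf, mlx) in results.items()
    }

if __name__ == "__main__":
    # Hypothetical captured outputs, keyed by prompt id as in the table.
    results = {
        "primary": ("blue", "Blue"),
        "two-plus-two": ("4", "4"),
        "largest-planet": ("Jupiter", "Jupiter."),
    }
    print(parity_check(results))
    # → {'primary': True, 'two-plus-two': True, 'largest-planet': True}
```

A run counts as PASS only when every prompt maps to True; any single mismatch would fail the pair.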