# Gemma 2 2B parity MLX (8-bit)
Same-origin MLX parity artifact for google/gemma-2-2b-it, produced for backend comparison work in mesh-llm.
- Source checkpoint: google/gemma-2-2b-it
- Conversion flow: original checkpoint -> MLX 8-bit
- Intended pair: meshllm/gemma-2-2b-it-parity-q8_0-gguf
This repo is intended for backend-parity testing, not as a claim of best overall model quality.
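To make the parity use case concrete, here is a minimal sketch of one way such a comparison might be run, assuming each backend (e.g. this MLX 8-bit checkpoint and its GGUF q8_0 pair) can dump per-token logits as numpy arrays. The function name, tolerance, and toy data below are illustrative, not part of the mesh-llm harness:

```python
import numpy as np

def parity_report(logits_a, logits_b, atol=1e-2):
    """Compare per-token logits from two backends and report numeric
    drift plus next-token (argmax) agreement."""
    logits_a = np.asarray(logits_a, dtype=np.float64)
    logits_b = np.asarray(logits_b, dtype=np.float64)
    # Worst-case elementwise drift across all positions and vocab entries.
    max_abs = float(np.abs(logits_a - logits_b).max())
    # Fraction of positions where both backends pick the same next token.
    agree = float((logits_a.argmax(axis=-1) == logits_b.argmax(axis=-1)).mean())
    return {
        "max_abs_diff": max_abs,
        "argmax_agreement": agree,
        "within_tol": max_abs <= atol,
    }

# Toy example: one backend's logits plus tiny simulated quantization noise.
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 32))
b = a + rng.normal(scale=1e-4, size=a.shape)
report = parity_report(a, b)
```

In practice one would feed both backends the same prompts and tokenizer, then compare argmax agreement first (cheap, catches gross divergence) before inspecting raw logit drift.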
- Downloads last month: 39
- Model size: 0.7B params
- Tensor types: BF16 · U32
- Quantization: 8-bit
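For readers unfamiliar with what an 8-bit conversion does to the weights, here is a rough numpy sketch of group-wise affine 8-bit quantization, similar in spirit to MLX-style weight quantization. The group size, rounding scheme, and helper names are assumptions for illustration, not the exact MLX implementation:

```python
import numpy as np

def quantize_8bit(w, group_size=64):
    """Group-wise affine 8-bit quantization: each group of `group_size`
    weights shares one scale and one offset (zero point)."""
    groups = w.reshape(-1, group_size)
    w_min = groups.min(axis=1, keepdims=True)
    w_max = groups.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 255.0
    scale = np.where(scale == 0, 1.0, scale)  # guard constant groups
    q = np.clip(np.round((groups - w_min) / scale), 0, 255).astype(np.uint8)
    return q, scale, w_min

def dequantize_8bit(q, scale, w_min):
    """Reconstruct approximate float weights from quantized values."""
    return q.astype(np.float32) * scale + w_min

w = np.random.default_rng(1).normal(size=(4, 64)).astype(np.float32)
q, scale, offset = quantize_8bit(w)
w_hat = dequantize_8bit(q, scale, offset).reshape(w.shape)
max_err = float(np.abs(w - w_hat).max())
```

The round trip is lossy, which is exactly why a parity pair like this repo and its GGUF counterpart is useful: both sides carry comparable 8-bit error, so remaining differences point at the backends rather than the quantization.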