Gemma 3 1B IT Parity MLX BF16

This repo contains a same-origin MLX bf16 conversion of google/gemma-3-1b-it, published by meshllm for backend parity testing.

Purpose

This artifact is intended to be paired with meshllm/gemma-3-1b-it-parity-f16-gguf when validating MLX versus GGUF behavior in mesh-llm.

It is not intended as a general-purpose community artifact. The goal is a clean, apples-to-apples parity pair derived from the same upstream checkpoint.

Source

  • Upstream checkpoint: google/gemma-3-1b-it
  • Converted from the original Hugging Face checkpoint by mesh-llm

Validation

Validated on studio54.local against the paired GGUF artifact with the mesh-llm exact prompt suite.

  Prompt           GGUF f16           MLX bf16
  ---------------  -----------------  -----------------
  primary          blue               blue
  alt-green        green              green
  alt-red          red                red
  capital-france   Paris              Paris
  primary-colors   Red, Green, Blue   Red, Green, Blue
  two-plus-two     2 + 2 = 4          2 + 2 = 4
  largest-planet   Jupiter            Jupiter
  breathing-gas    Oxygen             Oxygen
  opposite-hot     Cold               Cold
  banana-color     Yellow             Yellow
  after-monday     Tuesday            Tuesday

Summary:

  • GGUF exact: PASS
  • MLX exact: PASS
  • parity: MATCH
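The exact-match parity verdict above can be sketched as a small check over per-backend outputs. This is an illustrative sketch, not the mesh-llm suite itself; the `compare_exact` helper and the prompt-ID keys are hypothetical names chosen for the example.

```python
def compare_exact(gguf_outputs, mlx_outputs):
    """Compare two backends' outputs prompt-by-prompt, requiring exact matches.

    Both arguments map prompt IDs to the backend's (stripped) output string.
    Returns per-prompt booleans and an overall MATCH/MISMATCH verdict.
    """
    results = {
        prompt_id: gguf_outputs[prompt_id].strip() == mlx_outputs[prompt_id].strip()
        for prompt_id in gguf_outputs
    }
    verdict = "MATCH" if all(results.values()) else "MISMATCH"
    return results, verdict


# Two rows from the table above, as a usage example (hypothetical keys):
gguf = {"primary": "blue", "capital-france": "Paris"}
mlx = {"primary": "blue", "capital-france": "Paris"}
results, verdict = compare_exact(gguf, mlx)
# verdict == "MATCH"
```

Because the suite demands exact string equality rather than semantic similarity, any tokenizer or sampling divergence between backends surfaces immediately as a MISMATCH.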

Files

  • model.safetensors
  • model.safetensors.index.json
  • tokenizer and config files

Notes

This repo exists to support reproducible parity testing in mesh-llm.
