Instructions for using splats/gemma-4-31B-it-oQ4e with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use splats/gemma-4-31B-it-oQ4e with MLX:
```shell
# Download the model from the Hub
pip install huggingface_hub[hf_xet]
huggingface-cli download --local-dir gemma-4-31B-it-oQ4e splats/gemma-4-31B-it-oQ4e
```
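After downloading, one way to run the model locally is with the mlx-lm Python package. The snippet below is a minimal sketch that assumes the oQ-quantized checkpoint loads through mlx-lm's standard `load`/`generate` API; an oQ checkpoint may instead require oMLX's own loader. The path `gemma-4-31B-it-oQ4e` is the `--local-dir` from the command above, and the prompt text is a placeholder.

```python
# Minimal generation sketch with mlx-lm (`pip install mlx-lm`).
# Assumption: the oQ-quantized weights load through the standard
# mlx-lm path; they may instead require the oMLX loader.
from mlx_lm import load, generate

# Directory created by `huggingface-cli download --local-dir` above.
model, tokenizer = load("gemma-4-31B-it-oQ4e")

# Format the prompt with the model's chat template (instruct model).
messages = [{"role": "user", "content": "Write a haiku about quantization."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```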
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
gemma-4-31B-it-oQ4e
This model was quantized using oQ (oMLX v0.3.5) mixed-precision quantization.
Quantization details
- Model type: gemma4
- Bits: 4
- Group size: 64
- Format: MLX safetensors
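For intuition about the numbers above: with 4 bits and group size 64, each run of 64 weights is stored as 4-bit integer codes plus per-group parameters (MLX packs such codes into U32 tensors, which is presumably why U32 appears among the tensor types below). The sketch that follows illustrates plain group-wise affine quantization under those parameters; it is not oQ's mixed-precision algorithm, and every name in it is hypothetical.

```python
import numpy as np

# Illustrative group-wise affine 4-bit quantization, group size 64.
# NOT the oQ/oMLX mixed-precision algorithm; it only shows the idea.

def quantize_group(w, bits=4):
    """Affine-quantize one group: map floats to codes in [0, 2^bits - 1]."""
    qmax = (1 << bits) - 1              # 15 for 4-bit
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / qmax
    if scale == 0.0:                    # constant group: avoid divide-by-zero
        scale = 1.0
    codes = np.clip(np.round((w - lo) / scale), 0, qmax).astype(np.uint8)
    return codes, scale, lo             # codes plus per-group scale and zero point

def quantize(weights, group_size=64, bits=4):
    """Quantize a flat weight vector group by group (length must divide evenly)."""
    return [quantize_group(g, bits) for g in weights.reshape(-1, group_size)]

def dequantize(groups):
    """Reconstruct approximate weights from (codes, scale, zero) triples."""
    return np.concatenate([codes * scale + lo for codes, scale, lo in groups])

w = np.random.randn(4 * 64).astype(np.float32)
w_hat = dequantize(quantize(w))
print("max abs error:", np.abs(w - w_hat).max())
```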
Model size: 6B params
Tensor types: BF16, U32
Inference Providers
This model isn't deployed by any Inference Provider.