How to use with Docker Model Runner
docker model run hf.co/codemajesty/gemma2b-4bit-quantized
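Once the model is running, Docker Model Runner also exposes an OpenAI-compatible HTTP API that other programs can call. Below is a minimal sketch of querying the model from Python; the host port (12434) and endpoint path follow Model Runner's documented defaults but are assumptions here, and TCP access from the host may first need to be enabled in Docker Desktop's Model Runner settings.

```python
# Minimal sketch: query the model through Docker Model Runner's
# OpenAI-compatible API. The port and path below are assumptions
# based on Model Runner defaults, not taken from this model card.
import requests

resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/codemajesty/gemma2b-4bit-quantized",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```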
Safetensors
Model size: 3B params
Tensor types: F32, F16, U8

Model tree for codemajesty/gemma2b-4bit-quantized

Base model: google/gemma-2b
Quantized (35 models, including this one)