majentik/gemma-4-31B-RotorQuant-GGUF-Q8_0
Pipeline: Image-Text-to-Text · Format: GGUF · Language: English
Tags: rotorquant, kv-cache-quantization, gemma, gemma4, llama-cpp, quantized
arXiv: 2504.19874
License: apache-2.0
Branch: main
1 contributor · History: 5 commits
Latest commit: majentik, chore(card): enrich YAML frontmatter (pipeline_tag, language, library_name, inference), c1a3a39 (verified), 2 days ago
Files:
.gitattributes (1.59 kB) · Add GGUF Q8_0 quantized model · 5 days ago
README.md (7.24 kB) · chore(card): enrich YAML frontmatter (pipeline_tag, language, library_name, inference) · 2 days ago
gemma-4-31B-RotorQuant-Q8_0.gguf (32.6 GB) · Add GGUF Q8_0 quantized model · 5 days ago