legraphista/granite-20b-code-instruct-IMat-GGUF

Text Generation · GGUF · code · granite · quantized · quantization · imat · imatrix · static · 16bit · 8bit · 6bit · 5bit · 4bit · 3bit · 2bit · 1bit · Eval Results (legacy) · conversational
52.9 GB
  • 1 contributor
History: 11 commits
Latest commit: legraphista, "Upload granite-20b-code-instruct.Q5_K.gguf with huggingface_hub" (feceb8d, verified, over 1 year ago)
  • .gitattributes, 1.78 kB: Upload granite-20b-code-instruct.Q5_K.gguf with huggingface_hub (over 1 year ago)
  • README.md, 11 kB: Upload README.md with huggingface_hub (over 1 year ago)
  • granite-20b-code-instruct.Q5_K.gguf, 14.8 GB: Upload granite-20b-code-instruct.Q5_K.gguf with huggingface_hub (over 1 year ago)
  • granite-20b-code-instruct.Q6_K.gguf, 16.6 GB: Upload granite-20b-code-instruct.Q6_K.gguf with huggingface_hub (over 1 year ago)
  • granite-20b-code-instruct.Q8_0.gguf, 21.5 GB: Upload granite-20b-code-instruct.Q8_0.gguf with huggingface_hub (over 1 year ago)
  • imatrix.dat, 8.95 MB: Upload imatrix.dat with huggingface_hub (over 1 year ago)
  • imatrix.dataset, 280 kB: Upload imatrix.dataset with huggingface_hub (over 1 year ago)
  • imatrix.log, 12.2 kB: Upload imatrix.log with huggingface_hub (over 1 year ago)
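The file sizes above give a rough sense of each quantization level. A minimal sketch, assuming the model has roughly 20B parameters (inferred from the model name, not stated in this listing) and that file size divided by parameter count approximates bits per weight (the GGUF file also carries metadata and some tensors kept at higher precision, so these are approximations only):

```python
# Approximate bits-per-weight for the quantized files listed in this repo.
# ASSUMPTION: ~20e9 parameters, taken from the "20b" in the model name.
# Sizes are the byte counts shown in the file listing above.
PARAMS = 20e9

files = {
    "granite-20b-code-instruct.Q5_K.gguf": 14.8e9,
    "granite-20b-code-instruct.Q6_K.gguf": 16.6e9,
    "granite-20b-code-instruct.Q8_0.gguf": 21.5e9,
}

for name, size_bytes in files.items():
    bpw = size_bytes * 8 / PARAMS  # bits stored per model weight, roughly
    print(f"{name}: ~{bpw:.2f} bits/weight")
```

The estimates (about 5.9, 6.6, and 8.6 bits/weight) land close to the nominal 5-, 6-, and 8-bit levels of the Q5_K, Q6_K, and Q8_0 schemes, with the overhead coming from quantization block scales and metadata.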