bartowski/mamba-2.8b-hf-GGUF

Tags: Text Generation, Transformers, GGUF
32.8 GB • 1 contributor • History: 2 commits
Latest commit: 479752a (verified), "Llamacpp quants", almost 2 years ago, by bartowski
Files (all from the "Llamacpp quants" commit, almost 2 years ago):
  • .gitattributes (2.5 kB)
  • README.md (3.62 kB)
  • mamba-2.8b-hf-IQ3_M.gguf (1.68 GB)
  • mamba-2.8b-hf-IQ3_S.gguf (1.68 GB)
  • mamba-2.8b-hf-IQ4_NL.gguf (2.02 GB)
  • mamba-2.8b-hf-IQ4_XS.gguf (1.94 GB)
  • mamba-2.8b-hf-Q2_K.gguf (1.43 GB)
  • mamba-2.8b-hf-Q3_K_L.gguf (1.68 GB)
  • mamba-2.8b-hf-Q3_K_M.gguf (1.68 GB)
  • mamba-2.8b-hf-Q3_K_S.gguf (1.68 GB)
  • mamba-2.8b-hf-Q4_0.gguf (2.02 GB)
  • mamba-2.8b-hf-Q4_K_M.gguf (2.02 GB)
  • mamba-2.8b-hf-Q4_K_S.gguf (2.02 GB)
  • mamba-2.8b-hf-Q5_0.gguf (2.33 GB)
  • mamba-2.8b-hf-Q5_K_M.gguf (2.33 GB)
  • mamba-2.8b-hf-Q5_K_S.gguf (2.33 GB)
  • mamba-2.8b-hf-Q6_K.gguf (2.66 GB)
  • mamba-2.8b-hf-Q8_0.gguf (3.3 GB)
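The main choice on a page like this is which quantization to download: larger quants (Q8_0, Q6_K) preserve more quality, smaller ones (Q2_K, IQ3_*) trade quality for memory. A minimal sketch of that choice, with sizes hard-coded from the listing above; the actual download step uses `huggingface_hub.hf_hub_download` but is commented out here, since it would fetch a multi-GB file:

```python
# Pick the largest quant of bartowski/mamba-2.8b-hf-GGUF that fits a RAM budget.
# File sizes (in GB) are copied from the repository file listing above.
QUANTS = {
    "mamba-2.8b-hf-Q2_K.gguf": 1.43,
    "mamba-2.8b-hf-IQ3_M.gguf": 1.68,
    "mamba-2.8b-hf-Q3_K_M.gguf": 1.68,
    "mamba-2.8b-hf-IQ4_XS.gguf": 1.94,
    "mamba-2.8b-hf-Q4_K_M.gguf": 2.02,
    "mamba-2.8b-hf-Q5_K_M.gguf": 2.33,
    "mamba-2.8b-hf-Q6_K.gguf": 2.66,
    "mamba-2.8b-hf-Q8_0.gguf": 3.30,
}

def pick_quant(budget_gb: float) -> str:
    """Return the largest quant file whose size fits within budget_gb."""
    fitting = {name: size for name, size in QUANTS.items() if size <= budget_gb}
    if not fitting:
        raise ValueError(f"no quant fits in {budget_gb} GB")
    return max(fitting, key=fitting.get)

choice = pick_quant(3.0)
print(choice)  # the largest file at or under 3 GB

# To actually fetch the chosen file (requires the huggingface_hub package
# and network access), uncomment:
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(repo_id="bartowski/mamba-2.8b-hf-GGUF", filename=choice)
```

Note that file size is only a lower bound on memory use; inference also needs room for the compute state, so leaving some headroom below total RAM is a common rule of thumb.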