wredd/medgemma-4b-gguf

Tags: GGUF · medical · quantized · llama.cpp · african-healthcare · imatrix · conversational
Files and versions · 6.82 GB · 1 contributor · History: 7 commits

Latest commit by wredd: "Add MedGemma IQ1_M - Extreme compression with African healthcare imatrix (~0.7GB)" (e104e89, verified, 5 days ago)
  • .gitattributes (1.76 kB) - "Add MedGemma IQ1_M - Extreme compression with African healthcare imatrix (~0.7GB)", 5 days ago
  • README.md (1.7 kB) - "Update model card with IQ2_XS and African healthcare focus", 5 days ago
  • medgemma-4b-iq1_m.gguf (1.2 GB) - "Add MedGemma IQ1_M - Extreme compression with African healthcare imatrix (~0.7GB)", 5 days ago
  • medgemma-4b-iq2_xs.gguf (1.4 GB) - "Add MedGemma 4B IQ2_XS - Ultra-compressed with African medical imatrix (~0.9GB)", 5 days ago
  • medgemma-4b-q2_k.gguf (1.73 GB) - "Add MedGemma 4B Q2_K (2-bit) for budget Android phones", 6 days ago
  • medgemma-4b-q4_k_m.gguf (2.49 GB) - "Add MedGemma 4B Q4_K_M (INT4) for standard devices", 6 days ago
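For readers choosing between these quantizations, a minimal illustrative sketch of picking the largest file that fits a device's RAM budget. The file names and sizes are taken from the listing above; the headroom figure (space left for the KV cache and the OS) and the `pick_quant` helper are assumptions for illustration, not guidance from the model author:

```python
# Illustrative helper: pick the largest listed quant that fits a RAM budget.
# File names and sizes come from this repo's file list; the 2 GB headroom
# (for KV cache, OS, and other apps) is an assumed rule of thumb.

QUANTS = {
    # file name: size in GB, as listed on the model page
    "medgemma-4b-iq1_m.gguf": 1.2,
    "medgemma-4b-iq2_xs.gguf": 1.4,
    "medgemma-4b-q2_k.gguf": 1.73,
    "medgemma-4b-q4_k_m.gguf": 2.49,
}

def pick_quant(ram_gb: float, headroom_gb: float = 2.0):
    """Return the largest quant whose file size fits in ram_gb minus headroom,
    or None if nothing fits."""
    budget = ram_gb - headroom_gb
    fitting = [(size, name) for name, size in QUANTS.items() if size <= budget]
    return max(fitting)[1] if fitting else None

print(pick_quant(4.0))  # a 4 GB budget Android phone
print(pick_quant(8.0))  # an 8 GB standard device
```

With these assumed thresholds, a 4 GB phone lands on the Q2_K file and an 8 GB device on Q4_K_M, matching the intended targets named in the commit messages.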