Quantized model creation request: medgemma3-thinking

#1394
by testamentaddress01 - opened

This model is a bit of a pain as it doesn't follow the standardized SafeTensors repository structure. I will need to manually convert it into a GGUF.
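For reference, a manual SafeTensors-to-GGUF conversion is typically done with llama.cpp's converter followed by a quantize step. The sketch below is a hedged illustration, not the exact commands used here: the `<author>` placeholder, local paths, and the choice of `Q4_K_M` are assumptions.

```shell
# Hypothetical sketch of a manual HF -> GGUF conversion with llama.cpp.
# Paths, the <author> namespace, and the quant type are assumptions.
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Download the upstream checkpoint, then convert it to an F16 GGUF.
huggingface-cli download <author>/medgemma3-thinking --local-dir medgemma3-thinking
python llama.cpp/convert_hf_to_gguf.py medgemma3-thinking \
    --outfile medgemma3-thinking-f16.gguf --outtype f16

# Produce a static quant from the F16 file (Q4_K_M chosen as an example).
llama.cpp/llama-quantize medgemma3-thinking-f16.gguf \
    medgemma3-thinking-Q4_K_M.gguf Q4_K_M
```

A repository that deviates from the standard layout (as noted above) may need its tensors renamed or its config adjusted before `convert_hf_to_gguf.py` will accept it.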

I assume you want the medgemma3-thinking model and not the medgemma3-thinking-DirectLoRA version of it.


We can only quantize one model per upstream repository due to the way our system works. If you want both, either ask the author to separate them or clone one of them using https://huggingface.co/spaces/huggingface-projects/repo_duplicator

@testamentaddress01 It's queued and already almost done! :D

You can check progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#medgemma3-thinking-GGUF for quants to appear.

Static quants: https://huggingface.co/mradermacher/medgemma3-thinking-GGUF
Weighted/imatrix quants: https://huggingface.co/mradermacher/medgemma3-thinking-i1-GGUF
