Quantized model creation request: medgemma3-thinking
This model is a bit of a pain as it doesn't follow the standardized SafeTensors repository structure. I will need to manually convert it into a GGUF.
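For context, the manual route typically goes through llama.cpp's conversion tooling. Here is a minimal sketch of that flow; the local paths, output filenames, and the Q4_K_M quant type are illustrative assumptions, not the exact pipeline used by the queue:

```python
# Minimal sketch of a manual HF -> GGUF conversion, assuming local clones of
# the upstream model repo and of ggml-org/llama.cpp (built, so that the
# llama-quantize binary exists). All paths/names here are assumptions.
import subprocess

MODEL_DIR = "medgemma3-thinking"  # local clone of the upstream repo
LLAMA_CPP = "llama.cpp"           # local clone of ggml-org/llama.cpp

# Step 1: convert the (manually fixed-up) HF repo to an f16 GGUF.
subprocess.run(
    [
        "python", f"{LLAMA_CPP}/convert_hf_to_gguf.py",
        MODEL_DIR,
        "--outfile", "medgemma3-thinking.f16.gguf",
        "--outtype", "f16",
    ],
    check=True,
)

# Step 2: produce a static quant from the f16 GGUF.
subprocess.run(
    [
        f"{LLAMA_CPP}/build/bin/llama-quantize",
        "medgemma3-thinking.f16.gguf",
        "medgemma3-thinking.Q4_K_M.gguf",
        "Q4_K_M",
    ],
    check=True,
)
```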
I assume you want the medgemma3-thinking and not the medgemma3-thinking-DirectLoRA version of it.
We can only quantize one model per upstream repository due to the way our system works. If you want both, either ask the author to separate them or clone them using https://huggingface.co/spaces/huggingface-projects/repo_duplicator
@testamentaddress01 It's queued and already almost done! :D
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#medgemma3-thinking-GGUF for quants to appear.
Static quants: https://huggingface.co/mradermacher/medgemma3-thinking-GGUF
Weighted/imatrix quants: https://huggingface.co/mradermacher/medgemma3-thinking-i1-GGUF
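Once the quants appear, any of them can be fetched with the huggingface_hub client. A minimal sketch, assuming the usual `<model>.<quant>.gguf` filename convention (the exact filenames aren't confirmed until the files are published):

```python
# Minimal sketch of downloading one quant file once it exists.
# The filename below is an assumption based on the usual naming scheme.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/medgemma3-thinking-GGUF",
    filename="medgemma3-thinking.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```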