Standard conversion of the Torch-format model fails because the weights are stored in FP8. Please upload a full FP16 GGUF so it can be quantized to the desired formats.
Done: https://huggingface.co/N8Programs/Unslopper-GGUF/blob/main/Unslopper-30B-A3B-bf16-bf16.gguf
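For background on why FP8 weights block a straightforward conversion: FP8 checkpoints typically use the e4m3 layout (1 sign bit, 4 exponent bits, 3 mantissa bits, exponent bias 7), which conversion tooling must decode before upcasting to FP16/BF16. This is a minimal sketch of decoding a single e4m3 byte in pure Python, assuming the e4m3fn variant (no infinities; all-ones exponent and mantissa is NaN) — illustrative only, not the converter's actual code path:

```python
def fp8_e4m3_to_float(b: int) -> float:
    """Decode one FP8 e4m3fn byte (0-255) to a Python float."""
    sign = -1.0 if b & 0x80 else 1.0
    exp = (b >> 3) & 0xF   # 4 exponent bits, bias 7
    mant = b & 0x7         # 3 mantissa bits
    if exp == 0xF and mant == 0x7:
        return float("nan")            # e4m3fn: no inf, only NaN
    if exp == 0:
        return sign * (mant / 8) * 2.0 ** -6   # subnormal range
    return sign * (1.0 + mant / 8) * 2.0 ** (exp - 7)

# 0b0_0111_000: sign 0, exponent 7 (unbiased 0), mantissa 0 -> 1.0
print(fp8_e4m3_to_float(0b00111000))  # 1.0
print(fp8_e4m3_to_float(0b01000000))  # 2.0
print(fp8_e4m3_to_float(0b10111000))  # -1.0
```

With only 3 mantissa bits, e4m3 carries far less precision than FP16, which is why a full-precision (FP16/BF16) GGUF is the preferred starting point for re-quantizing to other formats.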