Working Qwen3-Reranker GGUFs (0.6B, 4B, 8B), converted with the official convert_hf_to_gguf.py. Most community conversions are broken: missing cls.out
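As a rough sketch of the conversion workflow the title refers to: the script name (convert_hf_to_gguf.py) and model repo (Qwen/Qwen3-Reranker-0.6B) come from the source, while the local paths, the download step, and the f16 output type are assumptions for illustration, not details stated here.

```shell
# Get llama.cpp, which ships the official conversion script (assumed layout)
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Fetch the original HF checkpoint (local directory name is an assumption)
huggingface-cli download Qwen/Qwen3-Reranker-0.6B --local-dir Qwen3-Reranker-0.6B

# Convert to GGUF with the official script; f16 output type is an assumption
python llama.cpp/convert_hf_to_gguf.py Qwen3-Reranker-0.6B \
  --outfile qwen3-reranker-0.6b-f16.gguf --outtype f16
```

The resulting .gguf can then be quantized further or loaded directly by llama.cpp on native Windows or Linux.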
VooDisss
AI & ML interests: Average Joe
Recent Activity
- 3 days ago: new activity on huggingface/InferenceSupport:unsloth/Qwen3.5-35B-A3B-GGUF
- 4 days ago: new activity on Qwen/Qwen3-Reranker-0.6B: "Working GGUF for llama.cpp (native Windows/Linux, no WSL needed)"

Organizations: None yet