---
tags:
  - gguf
  - llama.cpp
  - quantization
base_model: Qwen/Qwen3-Coder-Next
---

# Qwen3-Coder-Next-GGUF

This model was converted to GGUF format from [Qwen/Qwen3-Coder-Next](https://huggingface.co/Qwen/Qwen3-Coder-Next) using GGUF Forge.

## Quants

The following quants are available: Q2_K, Q3_K_S, Q3_K_M, Q3_K_L, Q4_0, Q4_K_S, Q4_K_M, Q5_0, Q5_K_S, Q5_K_M, Q6_K, Q8_0
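A quick way to fetch one quant and try it with llama.cpp. The repo id and file names below are assumptions based on this card; check the repository's file list for the actual names:

```shell
# Download a single quant from the Hub
# (repo id and filename are assumptions -- verify against the Files tab)
huggingface-cli download Akicou/Qwen3-Coder-Next-GGUF \
  Qwen3-Coder-Next-Q4_K_M.gguf --local-dir .

# Run a quick prompt with llama.cpp's CLI
llama-cli -m Qwen3-Coder-Next-Q4_K_M.gguf \
  -p "Write a binary search in C." -n 256
```

Lower quants (Q2_K, Q3_K_*) trade quality for memory; Q8_0 is closest to the FP16 original.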

## Ollama Support

Sharded GGUF outputs are merged into a single file after quantization, so every quant in this repo can be loaded directly by Ollama.
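To load one of these files into Ollama, a minimal Modelfile is enough. A sketch, assuming you downloaded the Q4_K_M quant (the filename and model name here are assumptions; adjust to your local file):

```shell
# Write a minimal Modelfile pointing at the local GGUF
# (filename is an assumption -- use whichever quant you downloaded)
cat > Modelfile <<'EOF'
FROM ./Qwen3-Coder-Next-Q4_K_M.gguf
EOF

# Register the model with Ollama and start chatting
ollama create qwen3-coder-next -f Modelfile
ollama run qwen3-coder-next
```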

## Conversion Stats

| Metric | Value |
| --- | --- |
| Job ID | 110acb9f-02d0-4c49-9f12-73ac6e47e8f1 |
| GGUF Forge Version | v5.8 |
| Total Time | 4.4 h |
| Avg Time per Quant | 27.2 min |

### Step Breakdown

- Download: 21.2 min
- FP16 Conversion: 20.6 min
- Quantization: 3.7 h
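The FP16-conversion and quantization steps above map onto two standard llama.cpp tools. A rough local equivalent, assuming a llama.cpp checkout with its Python requirements installed (the paths here are hypothetical):

```shell
# Step 1: convert the HF checkpoint to an FP16 GGUF
# (run from a llama.cpp checkout; source path is an assumption)
python convert_hf_to_gguf.py /path/to/Qwen3-Coder-Next \
  --outtype f16 --outfile Qwen3-Coder-Next-f16.gguf

# Step 2: quantize the FP16 file down to a smaller format, e.g. Q4_K_M
./llama-quantize Qwen3-Coder-Next-f16.gguf \
  Qwen3-Coder-Next-Q4_K_M.gguf Q4_K_M
```

Quantization dominates the total time because step 2 is repeated once per quant type.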

## 🚀 Convert Your Own Models

Want to convert more models to GGUF?

👉 [gguforge.com](https://gguforge.com): a free hosted GGUF conversion service. Log in with Hugging Face and request conversions instantly!

## Links

- 🌐 Free Hosted Service: [gguforge.com](https://gguforge.com)
- 🛠️ Self-host GGUF Forge: GitHub
- 📦 llama.cpp (quantization engine): GitHub
- 💬 Community & Support: Discord

*Converted automatically by GGUF Forge v5.8*