ubergarm/Qwen3-Coder-Next-GGUF
Likes: 8
Tags: Text Generation, GGUF, imatrix, conversational, qwen3_next, ik_llama.cpp
License: apache-2.0
Branch: main
Repository size: 120 GB
1 contributor, 10 commits
Latest commit by ubergarm: "uploaded new smallest IQ1_KT quant for 24GB GPUs" (c09edbb, about 18 hours ago)
Files:

Name                              | Size    | Last commit message                                        | Age
--------------------------------- | ------- | ---------------------------------------------------------- | ------------------
images/                           |         | upload best quant for full 24GB offload IQ1_KT             | about 18 hours ago
logs/                             |         | initial commit                                             | 4 days ago
.gitattributes                    | 1.65 kB | initial commit                                             | 4 days ago
Qwen3-Coder-Next-IQ1_KT.gguf      | 20.5 GB | Upload Qwen3-Coder-Next-IQ1_KT.gguf with huggingface_hub   | about 18 hours ago
Qwen3-Coder-Next-IQ4_KSS.gguf     | 42.3 GB | Upload Qwen3-Coder-Next-IQ4_KSS.gguf with huggingface_hub  | 4 days ago
Qwen3-Coder-Next-smol-IQ2_KS.gguf | 23.7 GB | Upload Qwen3-Coder-Next-smol-IQ2_KS.gguf with huggingface_hub | 4 days ago
Qwen3-Coder-Next-smol-IQ3_KS.gguf | 33 GB   | Upload Qwen3-Coder-Next-smol-IQ3_KS.gguf with huggingface_hub | 3 days ago
README.md                         | 9.64 kB | uploaded new smallest IQ1_KT quant for 24GB GPUs           | about 18 hours ago
imatrix-Qwen3-Coder-Next-BF16.dat | 457 MB  | Upload imatrix-Qwen3-Coder-Next-BF16.dat with huggingface_hub | 4 days ago
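The commit messages show these files were uploaded with huggingface_hub; fetching them programmatically works the same way (e.g. `hf_hub_download(repo_id=..., filename=...)` from the `huggingface_hub` package). As a minimal dependency-free sketch, the direct download URL for any file in the listing can be built from Hugging Face's standard `resolve` URL pattern; the filename chosen below is just one entry from the table above:

```python
# Build the direct download URL for a file in this repository.
# Hugging Face serves raw repo files at:
#   https://huggingface.co/<repo_id>/resolve/<revision>/<filename>
REPO_ID = "ubergarm/Qwen3-Coder-Next-GGUF"

def resolve_url(filename: str, revision: str = "main") -> str:
    """Return the direct 'resolve' URL for a file at a given revision."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"

# The smallest quant, per the latest commit ("for 24GB GPUs"):
print(resolve_url("Qwen3-Coder-Next-IQ1_KT.gguf"))
# https://huggingface.co/ubergarm/Qwen3-Coder-Next-GGUF/resolve/main/Qwen3-Coder-Next-IQ1_KT.gguf
```

These URLs can be passed to any HTTP client, though `huggingface_hub` additionally handles caching, resume, and the Xet-backed storage these files use.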