Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF

Text Generation · Transformers · GGUF · English
Tags: code, codeqwen, chat, qwen, qwen-coder, conversational
arxiv: 2409.12186 · arxiv: 2407.10671
License: apache-2.0
perf: switch to 1.5B Q2_K quantization for lowest possible latency on CPU

#3, opened Dec 20, 2025 by scriptsledge
base: refs/heads/main ← from: refs/pr/3
Files changed: +294 −0

scriptsledge (Dec 20, 2025): No description provided.
Commit 009eb6a8: perf: switch to 1.5B Q2_K quantization for lowest possible latency on CPU

scriptsledge changed pull request status to closed (Dec 20, 2025)
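The rationale implied by the PR title can be made concrete with a back-of-envelope estimate: CPU token generation is largely memory-bandwidth bound, so shrinking the weight bytes read per token roughly lowers decode latency in proportion. The sketch below is not from the PR itself; the ~2.56 bits/weight figure is llama.cpp's nominal Q2_K rate (real GGUF files run somewhat larger because some tensors keep higher-precision quants), and the 1.5B parameter count is taken from the repo name.

```python
# Rough estimate of weight-data size at different quantization rates.
# Assumptions (not from the PR): ~1.5B params, nominal Q2_K rate of
# ~2.5625 bits/weight; actual files are somewhat larger.

PARAMS = 1.5e9      # ~1.5B parameters, per the repo name
FP16_BPW = 16.0     # bits per weight in half precision
Q2K_BPW = 2.5625    # nominal bits per weight for Q2_K (approximate)

def weight_bytes(params: float, bits_per_weight: float) -> float:
    """Estimated bytes of weight data at a given bits-per-weight rate."""
    return params * bits_per_weight / 8

fp16_gb = weight_bytes(PARAMS, FP16_BPW) / 1e9
q2k_gb = weight_bytes(PARAMS, Q2K_BPW) / 1e9
print(f"fp16 weights: ~{fp16_gb:.2f} GB")
print(f"Q2_K weights: ~{q2k_gb:.2f} GB ({FP16_BPW / Q2K_BPW:.1f}x smaller)")
# → fp16 weights: ~3.00 GB
# → Q2_K weights: ~0.48 GB (6.2x smaller)
```

A ~6x reduction in bytes streamed per token is why a Q2_K file is attractive for CPU-only inference, at the cost of noticeably higher quantization error than Q4_K or Q5_K variants.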