hyunw55
AI & ML interests: None yet
Recent Activity
- new activity 15 days ago: olka-fi/Qwen3.5-35B-A3B-MXFP4 ("Thanks for the quick upload")
- new activity 21 days ago: olka-fi/Qwen3.5-122B-A10B-MXFP4 ("This quantization model is amazing")
- liked a model 21 days ago: olka-fi/Qwen3.5-122B-A10B-MXFP4

Organizations
None yet
Thanks for the quick upload
#1 opened 15 days ago by hyunw55
This quantization model is amazing
❤️ 2 · 4 comments
#1 opened 21 days ago by hyunw55
AWQ quantization please
#2 opened 3 months ago by hyunw55
Plan for AWQ?
#4 opened 4 months ago by hyunw55
Possibilities of QAT model versions?
3 reactions · 1 comment
#1 opened 5 months ago by Tridefender
Plan for AWQ?
28 reactions · 3 comments
#8 opened 7 months ago by hyunw55
Fails with "Unknown CUDA arch" error on Dual RTX 3090 with official vLLM image
3 comments
#7 opened 9 months ago by hyunw55
AWQ quantization, please
1 reaction
#9 opened 9 months ago by hyunw55
Game-changer for 4x24GB setups! AWQ request
3 reactions · 3 comments
#1 opened 10 months ago by hyunw55
Please share the custom vLLM source you made
1 reaction
#11 opened 11 months ago by hyunw55
AWQ quantized model support timeline?
8 reactions · 2 comments
#12 opened 11 months ago by hyunw55
GPTQ or AWQ Quants
1 reaction · 3 comments
#12 opened 11 months ago by guialfaro
GPTQ/AWQ
14 reactions · 4 comments
#3 opened 11 months ago by ndurkee
Any Plans for GLM-4-Z1 and Rumination Models in Dynamic v2.0 GGUF?
1 reaction
#2 opened 11 months ago by hyunw55
How to properly run EXAONE-Deep-32B-AWQ with vLLM?
1 reaction · 2 comments
#1 opened about 1 year ago by hyunw55
R1 distill to Mistral Small?
❤️ 10 · 4 comments
#99 opened about 1 year ago by nfunctor