Works great! Thank you!
I'm testing it now on a G292-Z20 with 8x A2000 12GB; tensor parallel 8 and a 131072 context length deliver up to 50 tps in generation. Very nice local coding model, even for tasks with some level of complexity (Rust Dioxus) — it can manage complex tasks quite nicely!
Thanks!
Yeah, same here. Thanks!
I notice it seems to do a lot of work in the background. When it is outputting text it is fast, and even prompt processing is fast, but it sits and churns on something for a long time before outputting text. This is the same issue I noticed with the last "next" model they put out. Would you mind posting your serve command?
This is probably a model-specific issue with vLLM. Qwen3 Coder Next (like the earlier Next model) can't use prefix caching. You can verify this with a long conversation: see if the model gets progressively slower as the context grows.
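For reference, a minimal sketch of a serve command for the setup described above. The model path is a placeholder, and the flags are assumptions based on vLLM's standard CLI; check them against your installed version:

```shell
# Hypothetical vLLM launch: 8-way tensor parallel, 131072-token context.
# <model-path> is a placeholder for your local or Hugging Face model ID.
vllm serve <model-path> \
  --tensor-parallel-size 8 \
  --max-model-len 131072 \
  --no-enable-prefix-caching  # explicit, since this model can't use it anyway
```

With prefix caching disabled explicitly, at least the behavior is predictable rather than silently falling back.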
The issue for the original model:
https://github.com/vllm-project/vllm/issues/25874
There may be a potential fix for the Coder Next model here as well:
https://github.com/vllm-project/vllm/pull/26807
Please let me know if it works; I think it may still be unsolved.